| Column | Type | Value range / classes |
| --- | --- | --- |
| url | string | lengths 61-61 |
| repository_url | string | 1 class |
| labels_url | string | lengths 75-75 |
| comments_url | string | lengths 70-70 |
| events_url | string | lengths 68-68 |
| html_url | string | lengths 49-51 |
| id | int64 | 1.16B-1.34B |
| node_id | string | lengths 18-19 |
| number | int64 | 3.81k-4.82k |
| title | string | lengths 1-162 |
| user | dict | |
| labels | list | |
| state | string | 2 classes |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | null | |
| comments | sequence | |
| created_at | int64 | 1,646B-1,660B |
| updated_at | int64 | 1,646B-1,660B |
| closed_at | int64 | 1,646B-1,660B (nullable ⌀) |
| author_association | string | 3 classes |
| active_lock_reason | null | |
| body | string | lengths 9-19.5k (nullable ⌀) |
| reactions | dict | |
| timeline_url | string | lengths 70-70 |
| performed_via_github_app | null | |
| state_reason | string | 3 classes |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
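Assuming this dump is hosted as a dataset repository on the Hub (the repository id below is a placeholder, not a real one), the columns described above can be inspected after loading it:

```python
from datasets import load_dataset

# "user/github-issues" is a hypothetical repository id standing in for wherever
# this dump is hosted; the column names come from the schema table above.
issues = load_dataset("user/github-issues", split="train")

print(issues.column_names)                     # url, repository_url, ..., is_pull_request
print(issues[0]["title"], issues[0]["state"])  # e.g. an issue title and "open"/"closed"
```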
https://api.github.com/repos/huggingface/datasets/issues/4522
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4522/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4522/comments
https://api.github.com/repos/huggingface/datasets/issues/4522/events
https://github.com/huggingface/datasets/issues/4522
1,274,929,328
I_kwDODunzps5L_eCw
4,522
Try to reduce the number of datasets that require manual download
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[]
1,655,466,123,000
1,655,466,768,000
null
CONTRIBUTOR
null
> Currently, 41 canonical datasets require manual download. I checked their scripts and I'm pretty sure this number can be reduced to โ‰ˆ 30 by not relying on bash scripts to download data, hosting data directly on the Hub when the license permits, etc. Then, we will mostly be left with datasets with restricted access, which we can ignore from https://github.com/huggingface/datasets-server/issues/12#issuecomment-1026920432
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4522/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4522/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4521
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4521/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4521/comments
https://api.github.com/repos/huggingface/datasets/issues/4521/events
https://github.com/huggingface/datasets/issues/4521
1,274,919,437
I_kwDODunzps5L_boN
4,521
Datasets method `.map` not hashing
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Fix posted: https://github.com/huggingface/datasets/issues/4506#issuecomment-1157417219", "Didn't realize it's a bug when I asked the question yesterday! Free free to post an answer if you are sure the cause has been addressed.\r\n\r\nhttps://stackoverflow.com/questions/72664827/can-pickle-dill-foo-but-not-lambda-x-foox", "Thank @nalzok . That works for me:\r\n\r\n`pip install \"dill<0.3.5\"`" ]
1,655,465,470,000
1,659,614,896,000
1,656,422,585,000
CONTRIBUTOR
null
## Describe the bug

Datasets method `.map` not hashing, even with an empty no-op function.

## Steps to reproduce the bug

```python
from datasets import load_dataset

# download 9MB dummy dataset
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean")

def prepare_dataset(batch):
    return(batch)

ds = ds.map(
    prepare_dataset,
    num_proc=1,
    desc="preprocess train dataset",
)
```

## Expected results

Hashed and cached dataset preprocessing.

## Actual results

Does not hash properly:

```
Parameter 'function'=<function prepare_dataset at 0x7fccb68e9280> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.
```

## Environment info

<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->

- `datasets` version: 2.3.3.dev0
- Platform: Linux-5.11.0-1028-gcp-x86_64-with-glibc2.31
- Python version: 3.9.12
- PyArrow version: 8.0.0
- Pandas version: 1.4.2

cc @lhoestq
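A quick way to check whether a transform like the one above can be cached is to hash it directly with `datasets.fingerprint.Hasher`; this is a minimal sketch, and if the call fails or is non-deterministic, `.map` falls back to a random fingerprint and prints the warning quoted above.

```python
from datasets.fingerprint import Hasher

def prepare_dataset(batch):
    return batch

# If the function can be serialized with dill, this returns a stable hash and
# `.map` can reuse its cache; otherwise `.map` warns and uses a random hash.
print(Hasher.hash(prepare_dataset))
```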
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4521/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4521/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4520
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4520/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4520/comments
https://api.github.com/repos/huggingface/datasets/issues/4520/events
https://github.com/huggingface/datasets/issues/4520
1,274,879,180
I_kwDODunzps5L_RzM
4,520
Failure to hash `dataclasses` - results in functions that cannot be hashed or cached in `.map`
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "I think this has been fixed by #4516, let me know if you encounter this again :)\r\n\r\nI re-ran your code in 3.7 and 3.9 and it works fine", "Thank you!" ]
1,655,462,837,000
1,656,427,637,000
1,656,425,069,000
CONTRIBUTOR
null
Dataclasses cannot be hashed. As a result, they cannot be hashed or cached if used in the `.map` method. Dataclasses are used extensively in Transformers examples scripts: (c.f. [CTC example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py)). Since dataclasses cannot be hashed, one has to define separate variables prior to passing dataclass attributes to the `.map` method: ```python phoneme_language = data_args.phoneme_language ``` in the example https://github.com/huggingface/transformers/blob/3c7e56fbb11f401de2528c1dcf0e282febc031cd/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L603-L630 ## Steps to reproduce the bug ```python from dataclasses import dataclass, field from datasets.fingerprint import Hasher @dataclass class DataTrainingArguments: """ Arguments pertaining to what data we are going to input our model for training and eval. """ phoneme_language: str = field( default=None, metadata={"help": "The name of the phoneme language to use."} ) data_args = DataTrainingArguments(phoneme_language ="foo") Hasher.hash(data_args) phoneme_language = data_args.phoneme_language Hasher.hash(phoneme_language) ``` ## Expected results A hash. ## Actual results <details> <summary> Traceback </summary> ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) Input In [1], in <cell line: 16>() 10 phoneme_language: str = field( 11 default=None, metadata={"help": "The name of the phoneme language to use."} 12 ) 14 data_args = DataTrainingArguments(phoneme_language ="foo") ---> 16 Hasher.hash(data_args) 18 phoneme_language = data_args. phoneme_language 20 Hasher.hash(phoneme_language) File ~/datasets/src/datasets/fingerprint.py:237, in Hasher.hash(cls, value) 235 return cls.dispatch[type(value)](cls, value) 236 else: --> 237 return cls.hash_default(value) File ~/datasets/src/datasets/fingerprint.py:230, in Hasher.hash_default(cls, value) 228 @classmethod 229 def hash_default(cls, value: Any) -> str: --> 230 return cls.hash_bytes(dumps(value)) File ~/datasets/src/datasets/utils/py_utils.py:564, in dumps(obj) 562 file = StringIO() 563 with _no_cache_fields(obj): --> 564 dump(obj, file) 565 return file.getvalue() File ~/datasets/src/datasets/utils/py_utils.py:539, in dump(obj, file) 537 def dump(obj, file): 538 """pickle an object to a file""" --> 539 Pickler(file, recurse=True).dump(obj) 540 return File ~/hf/lib/python3.8/site-packages/dill/_dill.py:620, in Pickler.dump(self, obj) 618 raise PicklingError(msg) 619 else: --> 620 StockPickler.dump(self, obj) 621 return File /usr/lib/python3.8/pickle.py:487, in _Pickler.dump(self, obj) 485 if self.proto >= 4: 486 self.framer.start_framing() --> 487 self.save(obj) 488 self.write(STOP) 489 self.framer.end_framing() File /usr/lib/python3.8/pickle.py:603, in _Pickler.save(self, obj, save_persistent_id) 599 raise PicklingError("Tuple returned by %s must have " 600 "two to six elements" % reduce) 602 # Save the reduce() output and finally memoize the object --> 603 self.save_reduce(obj=obj, *rv) File /usr/lib/python3.8/pickle.py:687, in _Pickler.save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj) 684 raise PicklingError( 685 "args[0] from __newobj__ args has the wrong class") 686 args = args[1:] --> 687 save(cls) 688 save(args) 689 write(NEWOBJ) File /usr/lib/python3.8/pickle.py:560, in _Pickler.save(self, obj, save_persistent_id) 558 f = self.dispatch.get(t) 559 if f is not None: 
--> 560 f(self, obj) # Call unbound method with explicit self 561 return 563 # Check private dispatch table if any, or else 564 # copyreg.dispatch_table File ~/hf/lib/python3.8/site-packages/dill/_dill.py:1838, in save_type(pickler, obj, postproc_list) 1836 postproc_list = [] 1837 postproc_list.append((setattr, (obj, '__qualname__', obj_name))) -> 1838 _save_with_postproc(pickler, (_create_type, ( 1839 type(obj), obj.__name__, obj.__bases__, _dict 1840 )), obj=obj, postproc_list=postproc_list) 1841 log.info("# %s" % _t) 1842 else: File ~/hf/lib/python3.8/site-packages/dill/_dill.py:1140, in _save_with_postproc(pickler, reduction, is_pickler_dill, obj, postproc_list) 1137 pickler._postproc[id(obj)] = postproc_list 1139 # TODO: Use state_setter in Python 3.8 to allow for faster cPickle implementations -> 1140 pickler.save_reduce(*reduction, obj=obj) 1142 if is_pickler_dill: 1143 # pickler.x -= 1 1144 # print(pickler.x*' ', 'pop', obj, id(obj)) 1145 postproc = pickler._postproc.pop(id(obj)) File /usr/lib/python3.8/pickle.py:692, in _Pickler.save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj) 690 else: 691 save(func) --> 692 save(args) 693 write(REDUCE) 695 if obj is not None: 696 # If the object is already in the memo, this means it is 697 # recursive. In this case, throw away everything we put on the 698 # stack, and fetch the object back from the memo. File /usr/lib/python3.8/pickle.py:560, in _Pickler.save(self, obj, save_persistent_id) 558 f = self.dispatch.get(t) 559 if f is not None: --> 560 f(self, obj) # Call unbound method with explicit self 561 return 563 # Check private dispatch table if any, or else 564 # copyreg.dispatch_table File /usr/lib/python3.8/pickle.py:901, in _Pickler.save_tuple(self, obj) 899 write(MARK) 900 for element in obj: --> 901 save(element) 903 if id(obj) in memo: 904 # Subtle. d was not in memo when we entered save_tuple(), so 905 # the process of saving the tuple's elements must have saved (...) 909 # could have been done in the "for element" loop instead, but 910 # recursive tuples are a rare thing. 
911 get = self.get(memo[id(obj)][0]) File /usr/lib/python3.8/pickle.py:560, in _Pickler.save(self, obj, save_persistent_id) 558 f = self.dispatch.get(t) 559 if f is not None: --> 560 f(self, obj) # Call unbound method with explicit self 561 return 563 # Check private dispatch table if any, or else 564 # copyreg.dispatch_table File ~/hf/lib/python3.8/site-packages/dill/_dill.py:1251, in save_module_dict(pickler, obj) 1248 if is_dill(pickler, child=False) and pickler._session: 1249 # we only care about session the first pass thru 1250 pickler._first_pass = False -> 1251 StockPickler.save_dict(pickler, obj) 1252 log.info("# D2") 1253 return File /usr/lib/python3.8/pickle.py:971, in _Pickler.save_dict(self, obj) 968 self.write(MARK + DICT) 970 self.memoize(obj) --> 971 self._batch_setitems(obj.items()) File /usr/lib/python3.8/pickle.py:997, in _Pickler._batch_setitems(self, items) 995 for k, v in tmp: 996 save(k) --> 997 save(v) 998 write(SETITEMS) 999 elif n: File /usr/lib/python3.8/pickle.py:560, in _Pickler.save(self, obj, save_persistent_id) 558 f = self.dispatch.get(t) 559 if f is not None: --> 560 f(self, obj) # Call unbound method with explicit self 561 return 563 # Check private dispatch table if any, or else 564 # copyreg.dispatch_table File ~/datasets/src/datasets/utils/py_utils.py:862, in save_function(pickler, obj) 859 if state_dict: 860 state = state, state_dict --> 862 dill._dill._save_with_postproc( 863 pickler, 864 ( 865 dill._dill._create_function, 866 (obj.__code__, globs, obj.__name__, obj.__defaults__, closure), 867 state, 868 ), 869 obj=obj, 870 postproc_list=postproc_list, 871 ) 872 else: 873 closure = obj.func_closure File ~/hf/lib/python3.8/site-packages/dill/_dill.py:1153, in _save_with_postproc(pickler, reduction, is_pickler_dill, obj, postproc_list) 1151 dest, source = reduction[1] 1152 if source: -> 1153 pickler.write(pickler.get(pickler.memo[id(dest)][0])) 1154 pickler._batch_setitems(iter(source.items())) 1155 else: 1156 # Updating with an empty dictionary. Same as doing nothing. KeyError: 140434581781568 ``` </details> ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.3.dev0 - Platform: Linux-5.11.0-1028-gcp-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 8.0.0 - Pandas version: 1.4.2 cc @lhoestq
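A minimal, self-contained sketch of the workaround described above (pending the fix in #4516): hash, and pass to `.map`, the plain attribute rather than the dataclass instance.

```python
from dataclasses import dataclass, field

from datasets.fingerprint import Hasher

@dataclass
class DataTrainingArguments:
    phoneme_language: str = field(default=None, metadata={"help": "The name of the phoneme language to use."})

data_args = DataTrainingArguments(phoneme_language="foo")

# Copy the needed attribute into a plain variable before using it inside a
# `.map` function, so only a picklable string is captured, not the dataclass.
phoneme_language = data_args.phoneme_language
print(Hasher.hash(phoneme_language))  # works even when Hasher.hash(data_args) fails
```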
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4520/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4520/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4519
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4519/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4519/comments
https://api.github.com/repos/huggingface/datasets/issues/4519/events
https://github.com/huggingface/datasets/pull/4519
1,274,110,623
PR_kwDODunzps45zhqa
4,519
Create new sections for audio and vision in guides
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Ready for review!\r\n\r\nThe `toctree` is a bit longer now with the sections. I think if we keep the audio/vision/text/dataset repository sections collapsed by default, and keep the general usage expanded, it may look a little cleaner and not as overwhelming. Let me know what you think! ๐Ÿ˜„ " ]
1,655,415,504,000
1,657,208,197,000
1,657,207,498,000
MEMBER
null
This PR creates separate sections in the guides for audio, vision, text, and general usage so it is easier for users to find loading, processing, or sharing guides specific to the dataset type they're working with. It'll also allow us to scale the docs to additional dataset types - like time series, tabular, etc. - while keeping our docs information architecture.

Some other changes include:

- ~Experimented with decorating text with some CSS to highlight guides specific to each modality. Hopefully, it'll be easier for users to find and realize that these different docs exist!~ Will experiment with this in a different PR.
- Added deprecation warning for Metrics and redirect to Evaluate.
- Updated `set_format` section to recommend using the new `to_tf_dataset` function if you need to convert to a TensorFlow dataset.
- Reorganized `toctree` to nest general usage, audio, vision, and text sections under the how-to guides.
- A quick review and edit to the Load and Process docs for clarity.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4519/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4519/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4519", "html_url": "https://github.com/huggingface/datasets/pull/4519", "diff_url": "https://github.com/huggingface/datasets/pull/4519.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4519.patch", "merged_at": 1657207498000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4518
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4518/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4518/comments
https://api.github.com/repos/huggingface/datasets/issues/4518/events
https://github.com/huggingface/datasets/pull/4518
1,274,010,628
PR_kwDODunzps45zMnB
4,518
Patch tests for hfh v0.8.0
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,408,732,000
1,655,482,557,000
1,655,481,967,000
MEMBER
null
This PR patches testing utilities that would otherwise fail with hfh v0.8.0.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4518/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4518/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4518", "html_url": "https://github.com/huggingface/datasets/pull/4518", "diff_url": "https://github.com/huggingface/datasets/pull/4518.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4518.patch", "merged_at": 1655481967000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4517
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4517/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4517/comments
https://api.github.com/repos/huggingface/datasets/issues/4517/events
https://github.com/huggingface/datasets/pull/4517
1,273,960,476
PR_kwDODunzps45zBl0
4,517
Add tags for task_ids:summarization-* and task_categories:summarization*
{ "login": "hobson", "id": 292855, "node_id": "MDQ6VXNlcjI5Mjg1NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/292855?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hobson", "html_url": "https://github.com/hobson", "followers_url": "https://api.github.com/users/hobson/followers", "following_url": "https://api.github.com/users/hobson/following{/other_user}", "gists_url": "https://api.github.com/users/hobson/gists{/gist_id}", "starred_url": "https://api.github.com/users/hobson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hobson/subscriptions", "organizations_url": "https://api.github.com/users/hobson/orgs", "repos_url": "https://api.github.com/users/hobson/repos", "events_url": "https://api.github.com/users/hobson/events{/privacy}", "received_events_url": "https://api.github.com/users/hobson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Associated community discussion is [here](https://huggingface.co/datasets/aeslc/discussions/1).\r\nPaper referenced in the `dataset_infos.json` is [here](https://arxiv.org/pdf/1906.03497.pdf). It mentions the _email-subject-generation_ task, which is not a tag mentioned in any other dataset so it was not added in this pull request. The _summarization_ task is mentioned as a related task.", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,405,545,000
1,657,293,263,000
1,657,292,551,000
CONTRIBUTOR
null
The YAML header at the top of the README.md file was edited to add task tags, because I couldn't find the existing tags in the JSON; a separate pull request will modify dataset_infos.json to add these tags.

The Enron dataset (dataset id `aeslc`) is only tagged with:

- arxiv:1906.03497
- languages:en
- pretty_name:AESLC

Using the email `subject_line` field as a label or target variable, it is possible to create models for the following task_ids (in order of relevance):

- 'task_ids:summarization'
- 'task_ids:summarization-other-conversations-summarization'
- "task_ids:other-other-query-based-multi-document-summarization"
- 'task_ids:summarization-other-aspect-based-summarization'
- 'task_ids:summarization--other-headline-generation'

The subject might also be used for the task_category "task_categories:summarization". E-mail chains might be used for the task category "task_categories:dialogue-system".
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4517/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4517/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4517", "html_url": "https://github.com/huggingface/datasets/pull/4517", "diff_url": "https://github.com/huggingface/datasets/pull/4517.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4517.patch", "merged_at": 1657292551000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4516
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4516/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4516/comments
https://api.github.com/repos/huggingface/datasets/issues/4516/events
https://github.com/huggingface/datasets/pull/4516
1,273,825,640
PR_kwDODunzps45ykYX
4,516
Fix hashing for python 3.9
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "What do you think @albertvillanova ?" ]
1,655,397,751,000
1,656,423,226,000
1,656,422,586,000
MEMBER
null
In Python 3.9, pickle hashes the `glob_ids` dictionary in addition to the `globs` of a function. Therefore the test at `tests/test_fingerprint.py::RecurseDumpTest::test_recurse_dump_for_function_with_shuffled_globals` is currently failing for Python 3.9. To make hashing deterministic when the globals are not in the same order, we also need to make the order of `glob_ids` deterministic.

Right now we don't have a CI to test Python 3.9, but we should definitely have one. For this PR in particular I ran the tests locally using Python 3.9 and they're passing now.

Fix https://github.com/huggingface/datasets/issues/4506
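The non-determinism can be illustrated with plain `pickle` (a generic illustration, not the code touched by this PR): dictionaries serialize in insertion order, so the same mapping built in two different orders produces different bytes, and therefore different hashes.

```python
import pickle

# The same mapping, built with two different insertion orders.
a = pickle.dumps({"x": 1, "y": 2})
b = pickle.dumps({"y": 2, "x": 1})

# Pickle preserves insertion order, so the serialized bytes differ even though
# the dictionaries compare equal; any hash of those bytes differs as well.
print(a == b)                                # False
print({"x": 1, "y": 2} == {"y": 2, "x": 1})  # True
```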
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4516/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 4, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4516/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4516", "html_url": "https://github.com/huggingface/datasets/pull/4516", "diff_url": "https://github.com/huggingface/datasets/pull/4516.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4516.patch", "merged_at": 1656422585000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4515
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4515/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4515/comments
https://api.github.com/repos/huggingface/datasets/issues/4515/events
https://github.com/huggingface/datasets/pull/4515
1,273,626,131
PR_kwDODunzps45x5mB
4,515
Add uppercased versions of image file extensions for automatic module inference
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,388,889,000
1,655,400,113,000
1,655,399,501,000
CONTRIBUTOR
null
Adds the uppercased versions of the image file extensions to the supported extensions. Another approach would be to call `.lower()` on extensions while resolving data files, but uppercased extensions are not something we want to encourage out of the box IMO, unless they are commonly used (as they are in the vision domain).

Note that there is a slight discrepancy between the image file resolution and `imagefolder`, as the latter calls `.lower()` on file extensions, leading to some image file extensions being ignored by the resolution but not by the loader (e.g. `pNg`). Such extensions should also be discouraged, so I'm ignoring that case too.

Fix #4514.
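A minimal sketch of the approach described above; the constant name below is illustrative, not the library's actual variable.

```python
# Illustrative only: extend a list of supported lowercase extensions with their
# uppercased variants instead of lowercasing file names at resolution time.
_IMAGE_EXTENSIONS = [".jpg", ".jpeg", ".png", ".bmp", ".tiff"]
_IMAGE_EXTENSIONS += [ext.upper() for ext in _IMAGE_EXTENSIONS]

print(_IMAGE_EXTENSIONS[-5:])  # ['.JPG', '.JPEG', '.PNG', '.BMP', '.TIFF']
```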
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4515/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4515/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4515", "html_url": "https://github.com/huggingface/datasets/pull/4515", "diff_url": "https://github.com/huggingface/datasets/pull/4515.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4515.patch", "merged_at": 1655399500000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4514
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4514/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4514/comments
https://api.github.com/repos/huggingface/datasets/issues/4514/events
https://github.com/huggingface/datasets/issues/4514
1,273,505,230
I_kwDODunzps5L6CXO
4,514
Allow .JPEG as a file extension
{ "login": "DiGyt", "id": 34550289, "node_id": "MDQ6VXNlcjM0NTUwMjg5", "avatar_url": "https://avatars.githubusercontent.com/u/34550289?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DiGyt", "html_url": "https://github.com/DiGyt", "followers_url": "https://api.github.com/users/DiGyt/followers", "following_url": "https://api.github.com/users/DiGyt/following{/other_user}", "gists_url": "https://api.github.com/users/DiGyt/gists{/gist_id}", "starred_url": "https://api.github.com/users/DiGyt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DiGyt/subscriptions", "organizations_url": "https://api.github.com/users/DiGyt/orgs", "repos_url": "https://api.github.com/users/DiGyt/repos", "events_url": "https://api.github.com/users/DiGyt/events{/privacy}", "received_events_url": "https://api.github.com/users/DiGyt/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi, thanks for reporting! I've opened a PR with the fix.", "Wow, that was quick! Thank you very much ๐Ÿ™ " ]
1,655,382,980,000
1,655,713,126,000
1,655,399,500,000
NONE
null
## Describe the bug

When loading image data, HF datasets seems to recognize `.jpg` and `.jpeg` file extensions, but not e.g. `.JPEG`. As the naming convention `.JPEG` is used in important datasets such as ImageNet, I would welcome it if corresponding extensions like `.JPEG` or `.JPG` were allowed.

## Steps to reproduce the bug

```python
# use bash to create 2 sham datasets with jpeg and JPEG ext
!mkdir dataset_a
!mkdir dataset_b
!wget https://upload.wikimedia.org/wikipedia/commons/7/71/Dsc_%28179253513%29.jpeg -O example_img.jpeg
!cp example_img.jpeg ./dataset_a/
!mv example_img.jpeg ./dataset_b/example_img.JPEG

from datasets import load_dataset

# working
df1 = load_dataset("./dataset_a", ignore_verifications=True)

# not working
df2 = load_dataset("./dataset_b", ignore_verifications=True)

# show
print(df1, df2)
```

## Expected results

```
DatasetDict({
    train: Dataset({
        features: ['image', 'label'],
        num_rows: 1
    })
})
DatasetDict({
    train: Dataset({
        features: ['image', 'label'],
        num_rows: 1
    })
})
```

## Actual results

```
FileNotFoundError: Unable to resolve any data file that matches '['**']' at /..PATH../dataset_b with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']
```

I know that it can be annoying to allow seemingly arbitrary numbers of file extensions. But I think this one would be really welcome.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4514/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4514/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4513
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4513/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4513/comments
https://api.github.com/repos/huggingface/datasets/issues/4513/events
https://github.com/huggingface/datasets/pull/4513
1,273,450,338
PR_kwDODunzps45xTqv
4,513
Update Google Cloud Storage documentation and add Azure Blob Storage example
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi @stevhliu, I've kept the `>>>` before all the in-line code comments as it was done like that in the default S3 example that was already there, I assume that it's done like that just for readiness, let me know whether we should remove the `>>>` in the Python blocks before the in-line code comments or keep them.\r\n\r\n![image](https://user-images.githubusercontent.com/36760800/174254663-b68d28d2-eae1-40f3-8695-dc4b0c3b479a.png)\r\n", "Comments are ignored by doctest, so I think we can remove the `>>>` :)", "Cool I'll remove those now ๐Ÿ‘๐Ÿป", "Sure @lhoestq, I just kept that structure as that was the more similar one to the one that was already there, but we can go with that approach, just let me know whether I should change the headers so as to leave all those providers in the same level (`h2`). Thanks!" ]
1,655,379,969,000
1,656,003,911,000
1,656,003,299,000
CONTRIBUTOR
null
While I was going through the 🤗 Datasets documentation of the Cloud storage filesystems at https://huggingface.co/docs/datasets/filesystems, I realized that the Google Cloud Storage documentation could be improved, e.g. a bullet point says "Load your dataset" when the actual call is to "Save your dataset", an in-line code comment mentions "s3 bucket" instead of "gcs bucket", and some more in-line comments could be included. Also, I think that mixing the Google Cloud Storage documentation with the AWS S3 one was a little bit confusing, so I moved all of those to the end of the document under an h2 heading named "Other filesystems", with an h3 for "Google Cloud Storage".

Besides that, I was currently working with Azure Blob Storage and found out that the URL to [adlfs](https://github.com/fsspec/adlfs) was common to both Azure Blob Storage and Azure DataLake Storage (the URL was also updated, even though the redirect was working fine), so I decided to group those under the same row in the column of supported filesystems. I also took the chance to add a small documentation entry for Azure Blob Storage, like the one for Google Cloud Storage, as I assume that AWS S3, GCP Cloud Storage, and Azure Blob Storage are the most used cloud storage providers.

Let me know if you're OK with these changes, or whether you want me to roll back some of those! :hugs:
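A hedged sketch of the kind of Azure Blob Storage example this PR adds; the account name, key, container, and dataset below are placeholders, and the snippet is not the PR's exact wording.

```python
from adlfs import AzureBlobFileSystem
from datasets import load_dataset

# Placeholder credentials; in practice these come from your Azure account.
fs = AzureBlobFileSystem(account_name="my_account_name", account_key="my_account_key")

# Save a dataset to a blob container so it can later be reloaded with
# `datasets.load_from_disk("abfs://my-container/imdb", fs=fs)`.
ds = load_dataset("imdb", split="train")
ds.save_to_disk("abfs://my-container/imdb", fs=fs)
```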
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4513/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4513/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4513", "html_url": "https://github.com/huggingface/datasets/pull/4513", "diff_url": "https://github.com/huggingface/datasets/pull/4513.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4513.patch", "merged_at": 1656003299000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4512
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4512/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4512/comments
https://api.github.com/repos/huggingface/datasets/issues/4512/events
https://github.com/huggingface/datasets/pull/4512
1,273,378,129
PR_kwDODunzps45xEDN
4,512
Add links to vision tasks scripts in ADD_NEW_DATASET template
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "The CI failure is unrelated to the PR's changes. Merging." ]
1,655,375,735,000
1,657,289,270,000
1,657,288,583,000
CONTRIBUTOR
null
Add links to vision dataset scripts in the ADD_NEW_DATASET template.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4512/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4512/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4512", "html_url": "https://github.com/huggingface/datasets/pull/4512", "diff_url": "https://github.com/huggingface/datasets/pull/4512.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4512.patch", "merged_at": 1657288583000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4511
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4511/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4511/comments
https://api.github.com/repos/huggingface/datasets/issues/4511/events
https://github.com/huggingface/datasets/pull/4511
1,273,336,874
PR_kwDODunzps45w7RN
4,511
Support all negative values in ClassLabel
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for this fix! I'm not sure what the release timeline is, but FYI #4508 is a breaking issue for transformer token classification using Trainer and PyTorch. PyTorch defaults to -100 as the ignored label for [negative log loss](https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html?highlight=nllloss#torch.nn.NLLLoss), so switching labels to -1 leads to index errors using Trainer defaults.\r\n\r\nAs a workaround, I'm using master branch directly (`pip install git+https://github.com/huggingface/datasets.git@master` for anyone who needs to do the same) until this gets released.", "The new release `2.4` fixes the issue, feel free to update `datasets` :) \r\n```\r\npip install -U datasets\r\n```" ]
1,655,373,579,000
1,659,024,207,000
1,655,387,647,000
MEMBER
null
We usually use -1 to represent a missing label, but we should also support any negative value (some users use -100, for example). This is a regression from `datasets` 2.3.

Fix https://github.com/huggingface/datasets/issues/4508
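A hedged sketch of the scenario this unblocks: a token-classification style column typed as `Sequence(ClassLabel(...))` where -100 marks positions the loss should ignore (label names and values below are illustrative).

```python
from datasets import ClassLabel, Dataset, Features, Sequence

# A tiny dataset whose "labels" column keeps the ClassLabel type.
features = Features({"labels": Sequence(ClassLabel(names=["O", "B-PER"]))})
ds = Dataset.from_dict({"labels": [[0, 1]]}, features=features)

# With this fix, a transform may write -100 (or any other negative value) into
# the column to mark missing/ignored labels without tripping the ClassLabel check.
ds = ds.map(lambda ex: {"labels": [l if l == 0 else -100 for l in ex["labels"]]})
print(ds[0]["labels"])  # [0, -100]
```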
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4511/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4511/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4511", "html_url": "https://github.com/huggingface/datasets/pull/4511", "diff_url": "https://github.com/huggingface/datasets/pull/4511.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4511.patch", "merged_at": 1655387647000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4510
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4510/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4510/comments
https://api.github.com/repos/huggingface/datasets/issues/4510/events
https://github.com/huggingface/datasets/pull/4510
1,273,260,396
PR_kwDODunzps45wq6o
4,510
Add regression test for `ArrowWriter.write_batch` when batch is empty
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "As mentioned by @lhoestq, the current behavior is correct and we should not expect batches with different columns, in that case, the if should fail, as the values of the batch can be empty, but not the actual `batch_examples` value." ]
1,655,369,631,000
1,655,383,082,000
1,655,382,499,000
CONTRIBUTOR
null
As spotted by @cccntu in #4502, there's a logic bug in `ArrowWriter.write_batch`: although the function's docstring says that it "Ignores the batch if it appears to be empty, preventing a potential schema update of unknown types.", the current if-statement does not handle `writer.write_batch({})` properly and an error is triggered. Also, if we add a regression test in `test_arrow_writer.py::test_write_batch` before applying the fix, the test fails when trying to write an empty batch:

```
=================================================================================== short test summary info ===================================================================================
FAILED tests/test_arrow_writer.py::test_write_batch[None-None] - ValueError: Schema and number of arrays unequal
FAILED tests/test_arrow_writer.py::test_write_batch[None-1] - ValueError: Schema and number of arrays unequal
FAILED tests/test_arrow_writer.py::test_write_batch[None-10] - ValueError: Schema and number of arrays unequal
FAILED tests/test_arrow_writer.py::test_write_batch[fields1-None] - ValueError: Schema and number of arrays unequal
FAILED tests/test_arrow_writer.py::test_write_batch[fields1-1] - ValueError: Schema and number of arrays unequal
FAILED tests/test_arrow_writer.py::test_write_batch[fields1-10] - ValueError: Schema and number of arrays unequal
FAILED tests/test_arrow_writer.py::test_write_batch[fields2-None] - ValueError: Schema and number of arrays unequal
FAILED tests/test_arrow_writer.py::test_write_batch[fields2-1] - ValueError: Schema and number of arrays unequal
FAILED tests/test_arrow_writer.py::test_write_batch[fields2-10] - ValueError: Schema and number of arrays unequal
======================================================================== 9 failed, 73 deselected, 7 warnings in 0.81s =========================================================================
```

So the batch is not ignored when empty, as `batch_examples={}` won't match the condition `if batch_examples: ...`.
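A minimal sketch of the empty-batch behaviour under discussion (illustrative, not the actual test added by the PR): a batch whose columns contain no values is ignored, so only the non-empty batch is written.

```python
import pyarrow as pa
from datasets.arrow_writer import ArrowWriter

output = pa.BufferOutputStream()
with ArrowWriter(stream=output) as writer:
    writer.write_batch({"col_1": [], "col_2": []})      # empty batch: ignored
    writer.write_batch({"col_1": ["a"], "col_2": [1]})  # real batch: written
    num_examples, num_bytes = writer.finalize()

print(num_examples)  # 1
```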
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4510/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4510/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4510", "html_url": "https://github.com/huggingface/datasets/pull/4510", "diff_url": "https://github.com/huggingface/datasets/pull/4510.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4510.patch", "merged_at": 1655382499000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4509
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4509/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4509/comments
https://api.github.com/repos/huggingface/datasets/issues/4509/events
https://github.com/huggingface/datasets/pull/4509
1,273,227,760
PR_kwDODunzps45wkDl
4,509
Support skipping Parquet to Arrow conversion when using Beam
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4509). All of your documentation changes will be reflected on that endpoint.", "When #4724 is merged, we can just pass `file_format=\"parquet\"` to `download_and_prepare` and it will output parquet fiels without converting to arrow" ]
1,655,367,938,000
1,658,937,800,000
null
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4509/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4509/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4509", "html_url": "https://github.com/huggingface/datasets/pull/4509", "diff_url": "https://github.com/huggingface/datasets/pull/4509.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4509.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4508
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4508/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4508/comments
https://api.github.com/repos/huggingface/datasets/issues/4508/events
https://github.com/huggingface/datasets/issues/4508
1,272,718,921
I_kwDODunzps5L3CZJ
4,508
cast_storage method from datasets.features
{ "login": "romainremyb", "id": 67968596, "node_id": "MDQ6VXNlcjY3OTY4NTk2", "avatar_url": "https://avatars.githubusercontent.com/u/67968596?v=4", "gravatar_id": "", "url": "https://api.github.com/users/romainremyb", "html_url": "https://github.com/romainremyb", "followers_url": "https://api.github.com/users/romainremyb/followers", "following_url": "https://api.github.com/users/romainremyb/following{/other_user}", "gists_url": "https://api.github.com/users/romainremyb/gists{/gist_id}", "starred_url": "https://api.github.com/users/romainremyb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/romainremyb/subscriptions", "organizations_url": "https://api.github.com/users/romainremyb/orgs", "repos_url": "https://api.github.com/users/romainremyb/repos", "events_url": "https://api.github.com/users/romainremyb/events{/privacy}", "received_events_url": "https://api.github.com/users/romainremyb/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi! We've recently added a check to the `ClassLabel` type to ensure the values are in the valid label range `-1, 0, ..., num_classes-1` (-1 is used for missing values). The error in your case happens only if the `labels` column is of type `Sequence(ClassLabel(...))` before the `map` call and can be avoided by calling `dataset = dataset.cast_column(\"labels\", Sequence(Value(\"int\")))` beforehand. The token-classification examples in Transformers introduce a new `labels` column, so their type is also `Sequence(Value(\"int\"))`, which doesn't lead to an error as this type unbounded. ", "I'm fine with re-adding support for all negative values for unknown/missing labels @mariosasko, wdyt ?" ]
1,655,326,042,000
1,655,387,647,000
1,655,387,647,000
NONE
null
## Describe the bug A bug occurs when mapping a function to a dataset object. I ran the same code with the same data yesterday and it worked just fine. It works when i run locally on an old version of datasets. ## Steps to reproduce the bug Steps are: - load whatever datset - write a preprocessing function such as "tokenize_and_align_labels" written in https://huggingface.co/docs/transformers/tasks/token_classification - map the function on dataset and get "ValueError: Class label -100 less than -1" from cast_storage method from datasets.features # Sample code to reproduce the bug def tokenize_and_align_labels(examples): tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True, max_length=38,padding="max_length") labels = [] for i, label in enumerate(examples[f"labels"]): word_ids = tokenized_inputs.word_ids(batch_index=i) # Map tokens to their respective word. previous_word_idx = None label_ids = [] for word_idx in word_ids: # Set the special tokens to -100. if word_idx is None: label_ids.append(-100) elif word_idx != previous_word_idx: # Only label the first token of a given word. label_ids.append(label[word_idx]) else: label_ids.append(-100) previous_word_idx = word_idx labels.append(label_ids) tokenized_inputs["labels"] = labels return tokenized_inputs tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") dt = dataset.map(tokenize_and_align_labels, batched=True) ## Expected results New dataset objects should load and do on older versions. ## Actual results "ValueError: Class label -100 less than -1" from cast_storage method from datasets.features ## Environment info everything works fine on older installations of datasets/transformers Issue arises when installing datasets on google collab under python3.7 I can't manage to find the exact output you're requirering but version printed is datasets-2.3.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4508/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4508/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4507
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4507/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4507/comments
https://api.github.com/repos/huggingface/datasets/issues/4507/events
https://github.com/huggingface/datasets/issues/4507
1,272,615,932
I_kwDODunzps5L2pP8
4,507
How to let `load_dataset` return a `Dataset` instead of `DatasetDict` in customized loading script
{ "login": "liyucheng09", "id": 27999909, "node_id": "MDQ6VXNlcjI3OTk5OTA5", "avatar_url": "https://avatars.githubusercontent.com/u/27999909?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liyucheng09", "html_url": "https://github.com/liyucheng09", "followers_url": "https://api.github.com/users/liyucheng09/followers", "following_url": "https://api.github.com/users/liyucheng09/following{/other_user}", "gists_url": "https://api.github.com/users/liyucheng09/gists{/gist_id}", "starred_url": "https://api.github.com/users/liyucheng09/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liyucheng09/subscriptions", "organizations_url": "https://api.github.com/users/liyucheng09/orgs", "repos_url": "https://api.github.com/users/liyucheng09/repos", "events_url": "https://api.github.com/users/liyucheng09/events{/privacy}", "received_events_url": "https://api.github.com/users/liyucheng09/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Hi @liyucheng09.\r\n\r\nUsers can pass the `split` parameter to `load_dataset`. For example, if your split name is \"train\",\r\n```python\r\nds = load_dataset(\"dataset_name\", split=\"train\")\r\n```\r\nwill return a `Dataset` instance.", "@albertvillanova Thanks! I can't believe I didn't know this feature till now." ]
1,655,319,394,000
1,655,376,008,000
1,655,376,008,000
NONE
null
If the dataset does not need splits, i.e., no training and validation split, more like a table. How can I let the `load_dataset` function return a `Dataset` object directly rather than return a `DatasetDict` object with only one key-value pair. Or I can paraphrase the question in the following way: how to skip `_split_generators` step in `DatasetBuilder` to let `as_dataset` gives a single `Dataset` rather than a list`[Dataset]`? Many thanks for any help.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4507/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4507/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4506
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4506/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4506/comments
https://api.github.com/repos/huggingface/datasets/issues/4506/events
https://github.com/huggingface/datasets/issues/4506
1,272,516,895
I_kwDODunzps5L2REf
4,506
Failure to hash (and cache) a `.map(...)` (almost always) - using this method can produce incorrect results
{ "login": "DrMatters", "id": 22641583, "node_id": "MDQ6VXNlcjIyNjQxNTgz", "avatar_url": "https://avatars.githubusercontent.com/u/22641583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DrMatters", "html_url": "https://github.com/DrMatters", "followers_url": "https://api.github.com/users/DrMatters/followers", "following_url": "https://api.github.com/users/DrMatters/following{/other_user}", "gists_url": "https://api.github.com/users/DrMatters/gists{/gist_id}", "starred_url": "https://api.github.com/users/DrMatters/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DrMatters/subscriptions", "organizations_url": "https://api.github.com/users/DrMatters/orgs", "repos_url": "https://api.github.com/users/DrMatters/repos", "events_url": "https://api.github.com/users/DrMatters/events{/privacy}", "received_events_url": "https://api.github.com/users/DrMatters/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Important info:\r\n\r\nAs hashes are generated randomly for functions, it leads to **false identifying some results as already hashed** (mapping function is not executed after a method update) when there's a `pytorch_lightning.seed_everything(123)`", "@lhoestq\r\nseems like quite critical stuff for me, if I'm not making a mistake", "Hi ! Thanks for reporting. This bug seems to appear in python 3.9 using dill 3.5.1\r\n\r\nAs a workaround you can use an older version of dill:\r\n```\r\npip install \"dill<0.3.5\"\r\n```", "installing `dill<0.3.5` after installing `datasets` by pip results in dependency conflict with the version required for `multiprocess`. It can be solved by installing `pip install datasets \"dill<0.3.5\"` (simultaneously) on a clean environment", "This has been fixed in https://github.com/huggingface/datasets/pull/4516, we will do a new release soon to include the fix :)" ]
1,655,313,091,000
1,656,422,629,000
1,656,422,585,000
NONE
null
## Describe the bug Sometimes I get messages about not being able to hash a method: `Parameter 'function'=<function StupidDataModule._separate_speaker_id_from_dialogue at 0x7f1b27180d30> of the transform datasets.arrow_dataset.Dataset. _map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.` Whilst the function looks like this: ```python @staticmethod def _separate_speaker_id_from_dialogue(example: arrow_dataset.Example): speaker_id, dialogue = tuple(zip(*(example["dialogue"]))) example["speaker_id"] = speaker_id example["dialogue"] = dialogue return example ``` This is the first step in my preprocessing pipeline, but sometimes the message about failure to hash is not appearing on the first step, but then appears on a later step. This error is sometimes causing a failure to use cached data, instead of re-running all steps again. ## Steps to reproduce the bug ```python import copy import datasets from datasets import arrow_dataset def main(): dataset = datasets.load_dataset("blended_skill_talk") res = dataset.map(method) print(res) def method(example: arrow_dataset.Example): example['previous_utterance_copy'] = copy.deepcopy(example['previous_utterance']) return example if __name__ == '__main__': main() ``` Run with: ``` python -m reproduce_error ``` ## Expected results Dataset is mapped and cached correctly. ## Actual results The code outputs this at some point: `Parameter 'function'=<function method at 0x7faa83d2a160> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Ubuntu 20.04.3 - Python version: 3.9.12 - PyArrow version: 8.0.0 - Datasets version: 2.3.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4506/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4506/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4505
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4505/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4505/comments
https://api.github.com/repos/huggingface/datasets/issues/4505/events
https://github.com/huggingface/datasets/pull/4505
1,272,477,226
PR_kwDODunzps45uH-o
4,505
Fix double dots in data files
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "The CI fails are unrelated to this PR (apparently something related to `seqeval` on windows) - merging :)" ]
1,655,310,664,000
1,655,313,358,000
1,655,312,753,000
MEMBER
null
As mentioned in https://github.com/huggingface/transformers/pull/17715 `data_files` can't find a file if the path contains double dots `/../`. This has been introduced in https://github.com/huggingface/datasets/pull/4412, by trying to ignore hidden files and directories (i.e. if they start with a dot) I fixed this and added a test cc @sgugger @ydshieh
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4505/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4505/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4505", "html_url": "https://github.com/huggingface/datasets/pull/4505", "diff_url": "https://github.com/huggingface/datasets/pull/4505.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4505.patch", "merged_at": 1655312753000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4504
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4504/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4504/comments
https://api.github.com/repos/huggingface/datasets/issues/4504/events
https://github.com/huggingface/datasets/issues/4504
1,272,418,480
I_kwDODunzps5L15Cw
4,504
Can you please add the Stanford dog dataset?
{ "login": "dgrnd4", "id": 69434832, "node_id": "MDQ6VXNlcjY5NDM0ODMy", "avatar_url": "https://avatars.githubusercontent.com/u/69434832?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dgrnd4", "html_url": "https://github.com/dgrnd4", "followers_url": "https://api.github.com/users/dgrnd4/followers", "following_url": "https://api.github.com/users/dgrnd4/following{/other_user}", "gists_url": "https://api.github.com/users/dgrnd4/gists{/gist_id}", "starred_url": "https://api.github.com/users/dgrnd4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dgrnd4/subscriptions", "organizations_url": "https://api.github.com/users/dgrnd4/orgs", "repos_url": "https://api.github.com/users/dgrnd4/repos", "events_url": "https://api.github.com/users/dgrnd4/events{/privacy}", "received_events_url": "https://api.github.com/users/dgrnd4/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" }, { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
{ "login": "khushmeeet", "id": 8711912, "node_id": "MDQ6VXNlcjg3MTE5MTI=", "avatar_url": "https://avatars.githubusercontent.com/u/8711912?v=4", "gravatar_id": "", "url": "https://api.github.com/users/khushmeeet", "html_url": "https://github.com/khushmeeet", "followers_url": "https://api.github.com/users/khushmeeet/followers", "following_url": "https://api.github.com/users/khushmeeet/following{/other_user}", "gists_url": "https://api.github.com/users/khushmeeet/gists{/gist_id}", "starred_url": "https://api.github.com/users/khushmeeet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/khushmeeet/subscriptions", "organizations_url": "https://api.github.com/users/khushmeeet/orgs", "repos_url": "https://api.github.com/users/khushmeeet/repos", "events_url": "https://api.github.com/users/khushmeeet/events{/privacy}", "received_events_url": "https://api.github.com/users/khushmeeet/received_events", "type": "User", "site_admin": false }
[ { "login": "khushmeeet", "id": 8711912, "node_id": "MDQ6VXNlcjg3MTE5MTI=", "avatar_url": "https://avatars.githubusercontent.com/u/8711912?v=4", "gravatar_id": "", "url": "https://api.github.com/users/khushmeeet", "html_url": "https://github.com/khushmeeet", "followers_url": "https://api.github.com/users/khushmeeet/followers", "following_url": "https://api.github.com/users/khushmeeet/following{/other_user}", "gists_url": "https://api.github.com/users/khushmeeet/gists{/gist_id}", "starred_url": "https://api.github.com/users/khushmeeet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/khushmeeet/subscriptions", "organizations_url": "https://api.github.com/users/khushmeeet/orgs", "repos_url": "https://api.github.com/users/khushmeeet/repos", "events_url": "https://api.github.com/users/khushmeeet/events{/privacy}", "received_events_url": "https://api.github.com/users/khushmeeet/received_events", "type": "User", "site_admin": false } ]
null
[ "would you like to give it a try, @dgrnd4? (maybe with the help of the dataset author?)", "@julien-c i am sorry but I have no idea about how it works: can I add the dataset by myself, following \"instructions to add a new dataset\"?\r\nCan I add a dataset even if it's not mine? (it's public in the link that I wrote on the post)\r\n", "Hi! The [ADD NEW DATASET](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md) instructions are indeed the best place to start. It's also perfectly fine to add a dataset if it's public, even if it's not yours. Let me know if you need some additional pointers.", "If no one is working on this, I could take this up!", "@khushmeeet this is the [link](https://huggingface.co/datasets/dgrnd4/stanford_dog_dataset) where I added the dataset already. If you can I would ask you to do this:\r\n1) The dataset it's all in TRAINING SET: can you please divide it in Training,Test and Validation Set? If you can for each class, take the 80% for the Training set and the 10% for Test and 10% Validation\r\n2) The images has different size, can you please resize all the images in 224,224,3? Look even at the last dimension \"3\" because some images has dimension 4!\r\n\r\nThank you!!", "Hi @khushmeeet! Thanks for the interest. You can self-assign the issue by commenting `#self-assign` on it. \r\n\r\nAlso, I think we can skip @dgrnd4's steps as we try to avoid any custom processing on top of raw data. One can later copy the script and override `_post_process` in it to perform such processing on the generated dataset.", "Thanks @mariosasko \r\n\r\n@dgrnd4 As dataset is there on Hub, and preprocessing is not recommended. I am not sure if there is any other task to do. However, I can't seem to find relevant `.py` files for this dataset in GitHub repo.", "@khushmeeet @mariosasko The point is that the images must be processed and must have the same size in order to can be used for things for example \"Training\". ", "@dgrnd4 Yes, but this can be done after loading (`map` to resize images and `train_test_split` to create extra splits)\r\n\r\n@khushmeeet The linked version is implemented as a no-code dataset and is generated directly from the ZIP archive, but our \"GitHub\" datasets (these are datasets without a user/org namespace on the Hub) need a generation script, and you can find one [here](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/image_classification/stanford_dogs.py). 
`datasets` started as a fork of TFDS, so we share similar script structure, which makes it trivial to adapt it.", "@mariosasko The point is that if I use something like this:\r\nx_train, x_test = train_test_split(dataset, test_size=0.1) \r\n\r\nto get Train 90% and Test 10%, and then to get the Validation Set (10% of the whole 100%):\r\n\r\n```\r\ntrain_ratio = 0.80\r\nvalidation_ratio = 0.10\r\ntest_ratio = 0.10\r\n\r\nx_train, x_test, y_train, y_test = train_test_split(dataX, dataY, test_size=1 - train_ratio)\r\nx_val, x_test, y_val, y_test = train_test_split(x_test, y_test, test_size=test_ratio/(test_ratio + validation_ratio)) \r\n\r\n```\r\n\r\nThe point is that the structure of the data is:\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['image', 'label'],\r\n num_rows: 20580\r\n })\r\n})\r\n\r\n```\r\n\r\nSo how to extract images and labels?\r\n\r\nEDIT --> Split of the dataset in Train-Test-Validation:\r\n```\r\nimport datasets\r\nfrom datasets.dataset_dict import DatasetDict\r\nfrom datasets import Dataset\r\n\r\npercentage_divison_test = int(len(dataset['train'])/100 *10) # 10% --> 2058 \r\npercentage_divison_validation = int(len(dataset['train'])/100 *20) # 20% --> 4116\r\n\r\ndataset_ = datasets.DatasetDict({\"train\": Dataset.from_dict({\r\n\r\n 'image': dataset['train'][0 : len(dataset['train']) ]['image'], \r\n 'labels': dataset['train'][0 : len(dataset['train']) ]['label'] }), \r\n \r\n \"test\": Dataset.from_dict({ #20580-4116 (validation) ,20580-2058 (test)\r\n 'image': dataset['train'][len(dataset['train']) - percentage_divison_validation : len(dataset['train']) - percentage_divison_test]['image'], \r\n 'labels': dataset['train'][len(dataset['train']) - percentage_divison_validation : len(dataset['train']) - percentage_divison_test]['label'] }), \r\n \r\n \"validation\": Dataset.from_dict({ # 20580-2058 (test)\r\n 'image': dataset['train'][len(dataset['train']) - percentage_divison_test : len(dataset['train'])]['image'], \r\n 'labels': dataset['train'][len(dataset['train']) - percentage_divison_test : len(dataset['train'])]['label'] }), \r\n })\r\n```", "@mariosasko in order to resize images I'm trying this method: \r\n```\r\nfor i in range(0,len(dataset['train'])): #len(dataset['train'])\r\n\r\n ex = dataset['train'][i] #i\r\n image = ex['image']\r\n image = image.convert(\"RGB\") # <class 'PIL.Image.Image'> <PIL.Image.Image image mode=RGB size=500x333 at 0x7F84F1948150>\r\n image_resized = image.resize(size_to_resize) # <PIL.Image.Image image mode=RGB size=224x224 at 0x7F84F17885D0>\r\n\r\n dataset['train'][i]['image'] = image_resized \r\n```\r\n\r\nBecause the DatasetDict is formed by arrows that are immutable, the changing assignment in the last line of code, doesn't work!\r\nDo you have any idea in order to get a valid result?", "#self-assign", "I have raised PR for adding stanford-dog dataset. I have not added any data preprocessing code. Only dataset generation script is there. Let me know any changes required, or anything to add to README." ]
1,655,307,575,000
1,657,342,067,000
null
NONE
null
## Adding a Dataset - **Name:** *Stanford dog dataset* - **Description:** *The dataset is about 120 classes for a total of 20.580 images. You can find the dataset here http://vision.stanford.edu/aditya86/ImageNetDogs/* - **Paper:** *http://vision.stanford.edu/aditya86/ImageNetDogs/* - **Data:** *[link to the Github repository or current dataset location](http://vision.stanford.edu/aditya86/ImageNetDogs/)* - **Motivation:** *The dataset has been built using images and annotation from ImageNet for the task of fine-grained image categorization. It is useful for fine-grain purpose * Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4504/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4504/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4503
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4503/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4503/comments
https://api.github.com/repos/huggingface/datasets/issues/4503/events
https://github.com/huggingface/datasets/pull/4503
1,272,367,055
PR_kwDODunzps45twLR
4,503
Refactor and add metadata to fever dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "But this is somehow fever v3 dataset (see this link https://fever.ai/ under the dropdown menu called Datasets). Our fever dataset already contains v1 and v2 configs. Then, I added this as if v3 config (but named feverous instead of v3 to align with the original naming by data owners).", "In any case, if you really think this should be a new dataset, then I would propose to create it on the Hub instead, as \"fever/feverous\".", "> In any case, if you really think this should be a new dataset, then I would propose to create it on the Hub instead, as \"fever/feverous\".\r\n\r\nYea makes sense ! thanks :) let's push more datasets on the hub rather than on github from now on", "I have added \"feverous\" dataset to the Hub: https://huggingface.co/datasets/fever/feverous\r\n\r\nI change the name of this PR accordingly, as now it only:\r\n- Refactors code and include for both Fever v1.0 and v2.0 specific:\r\n - Descriptions\r\n - Citations\r\n - Homepages\r\n- Updates documentation card aligned with above:\r\n - It was missing v2.0 description and citation.\r\n- Update metadata JSON" ]
1,655,305,187,000
1,657,108,455,000
1,657,107,690,000
MEMBER
null
Related to: #4452 and #3792.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4503/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4503/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4503", "html_url": "https://github.com/huggingface/datasets/pull/4503", "diff_url": "https://github.com/huggingface/datasets/pull/4503.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4503.patch", "merged_at": 1657107690000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4502
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4502/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4502/comments
https://api.github.com/repos/huggingface/datasets/issues/4502/events
https://github.com/huggingface/datasets/issues/4502
1,272,353,700
I_kwDODunzps5L1pOk
4,502
Logic bug in arrow_writer?
{ "login": "cccntu", "id": 31893406, "node_id": "MDQ6VXNlcjMxODkzNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cccntu", "html_url": "https://github.com/cccntu", "followers_url": "https://api.github.com/users/cccntu/followers", "following_url": "https://api.github.com/users/cccntu/following{/other_user}", "gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}", "starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cccntu/subscriptions", "organizations_url": "https://api.github.com/users/cccntu/orgs", "repos_url": "https://api.github.com/users/cccntu/repos", "events_url": "https://api.github.com/users/cccntu/events{/privacy}", "received_events_url": "https://api.github.com/users/cccntu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @cccntu you're right, as when `batch_examples={}` the current if-statement won't be triggered as the condition won't be satisfied, I'll prepare a PR to address it as well as add the regression tests so that this issue is handled properly.", "Hi @alvarobartt ,\r\nThanks for answering. Do you know when and why an empty batch is passed to this function? This only happened to me when processing with multiple workers, while chunking examples, I think.", "> Hi @alvarobartt , Thanks for answering. Do you know when and why an empty batch is passed to this function? This only happened to me when processing with multiple workers, while chunking examples, I think.\r\n\r\nSo it depends on how you're actually chunking the data as if you're not handling empty chunks `batch_examples={}` or `batch_examples=None`, you may end up running into this issue. So you could check the chunks before you actually call `ArrowWriter.write_batch`, but anyway the fix you proposed I think improves the logic of `write_batch` to avoid running into these issues.", "Thanks, I added a if-print and I found it does return an empty examples in the chunking function that is passed to `.map()`.", "Hi ! We consider an empty batch to look like this:\r\n```python\r\nempty_batch = {\r\n \"column_1\": [],\r\n \"column_2\": [],\r\n ...\r\n}\r\n```\r\n\r\nWhile `{}` corresponds to a batch with no columns.\r\n\r\nTherefore calling this code should fail, because the two batches don't have the same columns:\r\n```python\r\nwriter.write_batch({\"a\": [1, 2, 3]})\r\nwriter.write_batch({})\r\n```\r\n\r\nIf you want to write an empty batch, you should do this instead:\r\n```python\r\nwriter.write_batch({\"a\": [1, 2, 3]})\r\nwriter.write_batch({\"a\": []})\r\n```", "Makes sense, then the if-statement should remain the same or is it better to handle both cases separately using `if not batch_examples or len(next(iter(batch_examples.values()))) == 0: ...`?\r\n\r\nUpdating the regressions tests with an empty batch formatted as `{\"col_1\": [], \"col_2\": []}` instead of `{}` works fine with the current if, and also with the one proposed by @cccntu.", "> Makes sense, then the if-statement should remain the same or is it better to handle both cases separately using if not batch_examples or len(next(iter(batch_examples.values()))) == 0: ...?\r\n\r\nThere's a check later in the code that makes sure that the columns are the right ones, so I don't think we need to check for `{}` here\r\n\r\nIn particular the check `if not batch_examples or len(next(iter(batch_examples.values()))) == 0:` doesn't raise an error while it should, that why the old `if` is fine IMO\r\n\r\n> Updating the regressions tests with an empty batch formatted as {\"col_1\": [], \"col_2\": []} instead of {} works fine with the current if, and also with the one proposed by @cccntu.\r\n\r\nCool ! If you want you can update your PR to add the regression tests, to make sure that `{\"col_1\": [], \"col_2\": []}` works but not `{}`", "Great thanks for the response! So I'll just add that regression test and remove the current if-statement.", "Hi @lhoestq ,\r\n\r\nThanks for your explanation. Now I get it that `{}` means the columns are different. 
But wouldn't it be nice if the code can ignore it, like it ignores `{\"a\": []}`?\r\n\r\n\r\n--- \r\nBTW, \r\n> There's a check later in the code that makes sure that the columns are the right ones, so I don't think we need to check for {} here\r\n\r\nI remember the error happens around here:\r\nhttps://github.com/huggingface/datasets/blob/88a902d6474fae8d793542d57a4f3b0d187f3c5b/src/datasets/arrow_writer.py#L506-L507\r\nThe error says something like `arrays` and `schema` doesn't have the same length. And it's not very clear I passed a `{}`.\r\n\r\nedit: actual error message\r\n```\r\nFile \"site-packages/datasets/arrow_writer.py\", line 595, in write_batch\r\n pa_table = pa.Table.from_arrays(arrays, schema=schema)\r\n File \"pyarrow/table.pxi\", line 3557, in pyarrow.lib.Table.from_arrays\r\n File \"pyarrow/table.pxi\", line 1401, in pyarrow.lib._sanitize_arrays\r\nValueError: Schema and number of arrays unequal\r\n```", "> But wouldn't it be nice if the code can ignore it, like it ignores {\"a\": []}?\r\n\r\nI think it would make things confusing because it doesn't follow our definition of a batch: \"the columns of a batch = the keys of the dict\". It would probably break certain behaviors as well. For example if you remove all the columns of a dataset (using `.remove_colums(...)` or `.map(..., remove_columns=...)`), the writer has to write 0 columns, and currently the only way to tell the writer to do so using `write_batch` is to pass `{}`.\r\n\r\n> The error says something like arrays and schema doesn't have the same length. And it's not very clear I passed a {}.\r\n\r\nYea the message can actually be improved indeed, it's definitely not clear. Maybe we can add a line right before the call `pa.Table.from_arrays` to make sure the keys of the batch match the field names of the schema" ]
1,655,304,600,000
1,655,565,351,000
1,655,565,351,000
CONTRIBUTOR
null
https://github.com/huggingface/datasets/blob/88a902d6474fae8d793542d57a4f3b0d187f3c5b/src/datasets/arrow_writer.py#L475-L488 I got some error, and I found it's caused by `batch_examples` being `{}`. I wonder if the code should be as follows: ``` - if batch_examples and len(next(iter(batch_examples.values()))) == 0: + if not batch_examples or len(next(iter(batch_examples.values()))) == 0: return ``` @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4502/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4502/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4501
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4501/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4501/comments
https://api.github.com/repos/huggingface/datasets/issues/4501/events
https://github.com/huggingface/datasets/pull/4501
1,272,300,646
PR_kwDODunzps45th2M
4,501
Corrected broken links in doc
{ "login": "clefourrier", "id": 22726840, "node_id": "MDQ6VXNlcjIyNzI2ODQw", "avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/clefourrier", "html_url": "https://github.com/clefourrier", "followers_url": "https://api.github.com/users/clefourrier/followers", "following_url": "https://api.github.com/users/clefourrier/following{/other_user}", "gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}", "starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions", "organizations_url": "https://api.github.com/users/clefourrier/orgs", "repos_url": "https://api.github.com/users/clefourrier/repos", "events_url": "https://api.github.com/users/clefourrier/events{/privacy}", "received_events_url": "https://api.github.com/users/clefourrier/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,302,337,000
1,655,305,865,000
1,655,305,256,000
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4501/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4501/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4501", "html_url": "https://github.com/huggingface/datasets/pull/4501", "diff_url": "https://github.com/huggingface/datasets/pull/4501.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4501.patch", "merged_at": 1655305256000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4500
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4500/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4500/comments
https://api.github.com/repos/huggingface/datasets/issues/4500/events
https://github.com/huggingface/datasets/pull/4500
1,272,281,992
PR_kwDODunzps45tdxk
4,500
Add `concatenate_datasets` for iterable datasets
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks ! I addressed your comments :)\r\n\r\n> There is a slight difference in concatenate_datasets between the version for map-style datasets and the one for iterable datasets\r\n\r\nIndeed, here is what I did to fix this:\r\n\r\n- axis 0: fill missing columns with None.\r\n(I first iterate over the input datasets to infer their columns from the first examples, then I set the features of the resulting dataset to be the merged features)\r\nThis is consistent with non-streaming concatenation\r\n\r\n- axis 1: **fill the missing rows with None**, for consistency with axis 0\r\n(but let me know what you think, I can still revert this behavior and raise an error when one of the dataset runs out of examples)\r\nWe might have to align the non-streaming concatenation with this behavior though, for consistency. What do you think ?", "Added more comments as suggested, and some typing\r\n\r\nWhile factorizing _apply_features_types for both IterableDataset and TypedExamplesIterable, I fixed a missing `token_per_repo_id` that was not passed to TypedExamplesIteable\r\n\r\nLet me know what you think now @mariosasko " ]
1,655,301,530,000
1,656,451,539,000
1,656,450,904,000
MEMBER
null
`concatenate_datasets` currently only supports lists of `datasets.Dataset`, not lists of `datasets.IterableDataset` like `interleave_datasets` Fix https://github.com/huggingface/datasets/issues/2564 I also moved `_interleave_map_style_datasets` from combine.py to arrow_dataset.py, since the logic depends a lot on the `Dataset` object internals And I moved `concatenate_datasets` from arrow_dataset.py to combine.py to have it with `interleave_datasets` (though it's also copied in arrow_dataset module for backward compatibility for now)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4500/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4500/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4500", "html_url": "https://github.com/huggingface/datasets/pull/4500", "diff_url": "https://github.com/huggingface/datasets/pull/4500.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4500.patch", "merged_at": 1656450904000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4499
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4499/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4499/comments
https://api.github.com/repos/huggingface/datasets/issues/4499/events
https://github.com/huggingface/datasets/pull/4499
1,272,118,162
PR_kwDODunzps45s6Jh
4,499
fix ETT m1/m2 test/val dataset
{ "login": "kashif", "id": 8100, "node_id": "MDQ6VXNlcjgxMDA=", "avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kashif", "html_url": "https://github.com/kashif", "followers_url": "https://api.github.com/users/kashif/followers", "following_url": "https://api.github.com/users/kashif/following{/other_user}", "gists_url": "https://api.github.com/users/kashif/gists{/gist_id}", "starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kashif/subscriptions", "organizations_url": "https://api.github.com/users/kashif/orgs", "repos_url": "https://api.github.com/users/kashif/repos", "events_url": "https://api.github.com/users/kashif/events{/privacy}", "received_events_url": "https://api.github.com/users/kashif/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thansk for the fix ! Can you regenerate the datasets_infos.json please ? This way it will update the expected number of examples in the test and val splits", "ah yes!" ]
1,655,293,862,000
1,655,304,956,000
1,655,304,313,000
CONTRIBUTOR
null
https://huggingface.co/datasets/ett/discussions/1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4499/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4499/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4499", "html_url": "https://github.com/huggingface/datasets/pull/4499", "diff_url": "https://github.com/huggingface/datasets/pull/4499.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4499.patch", "merged_at": 1655304312000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4498
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4498/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4498/comments
https://api.github.com/repos/huggingface/datasets/issues/4498/events
https://github.com/huggingface/datasets/issues/4498
1,272,100,549
I_kwDODunzps5L0rbF
4,498
WER and CER > 1
{ "login": "sadrasabouri", "id": 43045767, "node_id": "MDQ6VXNlcjQzMDQ1NzY3", "avatar_url": "https://avatars.githubusercontent.com/u/43045767?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sadrasabouri", "html_url": "https://github.com/sadrasabouri", "followers_url": "https://api.github.com/users/sadrasabouri/followers", "following_url": "https://api.github.com/users/sadrasabouri/following{/other_user}", "gists_url": "https://api.github.com/users/sadrasabouri/gists{/gist_id}", "starred_url": "https://api.github.com/users/sadrasabouri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sadrasabouri/subscriptions", "organizations_url": "https://api.github.com/users/sadrasabouri/orgs", "repos_url": "https://api.github.com/users/sadrasabouri/repos", "events_url": "https://api.github.com/users/sadrasabouri/events{/privacy}", "received_events_url": "https://api.github.com/users/sadrasabouri/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "WER can have values bigger than 1.0, this is expected when there are too many insertions\r\n\r\nFrom [wikipedia](https://en.wikipedia.org/wiki/Word_error_rate):\r\n> Note that since N is the number of words in the reference, the word error rate can be larger than 1.0" ]
1,655,292,912,000
1,655,311,085,000
1,655,311,085,000
NONE
null
## Describe the bug It seems that in some cases in which the `prediction` is longer than the `reference` we may have word/character error rate higher than 1 which is a bit odd. If it's a real bug I think I can solve it with a PR changing [this](https://github.com/huggingface/datasets/blob/master/metrics/wer/wer.py#L105) line to ```python return min(incorrect / total, 1.0) ``` ## Steps to reproduce the bug ```python from datasets import load_metric wer = load_metric("wer") wer_value = wer.compute(predictions=["Hi World vka"], references=["Hello"]) print(wer_value) ``` ## Expected results ``` 1.0 ``` ## Actual results ``` 3.0 ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.0 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4498/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4498/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4497
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4497/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4497/comments
https://api.github.com/repos/huggingface/datasets/issues/4497/events
https://github.com/huggingface/datasets/pull/4497
1,271,964,338
PR_kwDODunzps45sYns
4,497
Re-add download_manager module in utils
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for the fix.\r\n\r\nI'm wondering how this fixes backward compatibility...\r\n\r\nExecuting this code:\r\n```python\r\nfrom datasets.utils.download_manager import DownloadMode\r\n```\r\nwe will have\r\n```python\r\nDownloadMode = None\r\n```\r\n\r\nIf afterwards we use something like:\r\n```python\r\nif download_mode == DownloadMode.FORCE_REDOWNLOAD\r\n```\r\nthat will raise an exception.", "It works fine on my side:\r\n```python\r\n>>> from datasets.utils.download_manager import DownloadMode\r\n>>> DownloadMode is not None\r\nTrue\r\n```", "As reported in https://github.com/huggingface/evaluate/pull/143\r\n```python\r\nfrom datasets.utils import DownloadConfig\r\n```\r\nis also missing, I'm re-adding it", "Took the liberty of merging this one, to do a patch release soon. If we think of a better approach we can improve it later" ]
1,655,286,273,000
1,655,289,208,000
1,655,288,624,000
MEMBER
null
https://github.com/huggingface/datasets/pull/4384 moved `datasets.utils.download_manager` to `datasets.download.download_manager` This breaks `evaluate` which imports `DownloadMode` from `datasets.utils.download_manager` This PR re-adds `datasets.utils.download_manager` without circular imports. We could also show a message that says that accessing it is deprecated, but I think we can do this in a subsequent PR, and just focus on doing a patch release for now
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4497/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4497/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4497", "html_url": "https://github.com/huggingface/datasets/pull/4497", "diff_url": "https://github.com/huggingface/datasets/pull/4497.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4497.patch", "merged_at": 1655288624000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4496
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4496/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4496/comments
https://api.github.com/repos/huggingface/datasets/issues/4496/events
https://github.com/huggingface/datasets/pull/4496
1,271,945,704
PR_kwDODunzps45sUnW
4,496
Replace `assertEqual` with `assertTupleEqual` in unit tests for verbosity
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "FYI I used the following regex to look for the `assertEqual` statements where the assertion was being done over a Tuple: `self.assertEqual(.*, \\(.*,)(\\)\\))$`, hope this is useful!" ]
1,655,285,356,000
1,657,213,611,000
1,657,212,948,000
CONTRIBUTOR
null
As detailed in #4419 and as suggested by @mariosasko, we could replace the `assertEqual` assertions with `assertTupleEqual` when the assertion is between Tuples, in order to make the tests more verbose.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4496/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4496/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4496", "html_url": "https://github.com/huggingface/datasets/pull/4496", "diff_url": "https://github.com/huggingface/datasets/pull/4496.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4496.patch", "merged_at": 1657212948000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4495
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4495/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4495/comments
https://api.github.com/repos/huggingface/datasets/issues/4495/events
https://github.com/huggingface/datasets/pull/4495
1,271,851,025
PR_kwDODunzps45sAgO
4,495
Fix patching module that doesn't exist
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,281,070,000
1,655,311,249,000
1,655,283,249,000
MEMBER
null
Reported in https://github.com/huggingface/huggingface_hub/runs/6894703718?check_suite_focus=true When trying to patch `scipy.io.loadmat`: ```python ModuleNotFoundError: No module named 'scipy' ``` Instead, it should not raise an error and should simply do nothing. Bug introduced by #4375. Fixes https://github.com/huggingface/datasets/issues/4494
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4495/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4495/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4495", "html_url": "https://github.com/huggingface/datasets/pull/4495", "diff_url": "https://github.com/huggingface/datasets/pull/4495.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4495.patch", "merged_at": 1655283249000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4494
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4494/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4494/comments
https://api.github.com/repos/huggingface/datasets/issues/4494/events
https://github.com/huggingface/datasets/issues/4494
1,271,850,599
I_kwDODunzps5LzuZn
4,494
Patching fails for modules that are not installed or don't exist
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,655,281,049,000
1,655,283,249,000
1,655,283,249,000
MEMBER
null
Reported in https://github.com/huggingface/huggingface_hub/runs/6894703718?check_suite_focus=true When trying to patch `scipy.io.loadmat`: ```python ModuleNotFoundError: No module named 'scipy' ``` Instead, it should not raise an error and should simply do nothing. We use patching to extend such functions to support remote URLs and work in streaming mode (a sketch of this guard follows this record).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4494/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4494/timeline
null
completed
null
null
false
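The behavior requested above (silently skip the patch when the target module is not installed) can be sketched with a small stand-alone helper. The function below is hypothetical and is not the actual patching utility in `datasets`.

```python
import importlib


def maybe_patch(module_name: str, attr: str, replacement):
    """Hypothetical helper: patch `module.attr` only if the module is importable.

    Returns the original attribute, or None when the module is missing,
    instead of raising ModuleNotFoundError.
    """
    try:
        module = importlib.import_module(module_name)
    except ModuleNotFoundError:
        return None  # e.g. scipy not installed: silently do nothing
    original = getattr(module, attr, None)
    setattr(module, attr, replacement)
    return original


# Example: this is a no-op if scipy is not installed.
maybe_patch("scipy.io", "loadmat", lambda *args, **kwargs: {})
```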
https://api.github.com/repos/huggingface/datasets/issues/4493
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4493/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4493/comments
https://api.github.com/repos/huggingface/datasets/issues/4493/events
https://github.com/huggingface/datasets/pull/4493
1,271,306,385
PR_kwDODunzps45qL7J
4,493
Add `@transmit_format` in `flatten`
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "@mariosasko please let me know whether we need to include some sort of tests to make sure that the decorator is working as expected. Thanks! ๐Ÿค— ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4493). All of your documentation changes will be reflected on that endpoint.", "Hi, thanks for working on this! Yes, please add (simple) tests so we can avoid any unexpected behavior in the future.\r\n\r\n`@transmit_format` doesn't handle column renaming, so I removed it from `rename_column` and `rename_columns` and added a comment to explain this." ]
1,655,237,349,000
1,658,485,736,000
null
CONTRIBUTOR
null
As suggested by @mariosasko in https://github.com/huggingface/datasets/pull/4411, we should add the `@transmit_format` decorator to `flatten`, `rename_column`, and `rename_columns` so as to ensure that the value of `_format_columns` in an `ArrowDataset` is properly updated. **Edit**: according to @mariosasko's comment below, the `@transmit_format` decorator doesn't handle column renaming, so the format is updated manually for those methods instead (see the sketch after this record).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4493/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4493/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4493", "html_url": "https://github.com/huggingface/datasets/pull/4493", "diff_url": "https://github.com/huggingface/datasets/pull/4493.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4493.patch", "merged_at": null }
true
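As a rough idea of what a format-transmitting decorator does, here is a heavily simplified, hypothetical sketch that copies the caller's output format onto the returned dataset; it is not the real `@transmit_format` implementation and it glosses over the column-renaming caveat mentioned above.

```python
import functools


def transmit_format_sketch(method):
    """Hypothetical simplification: re-apply the caller's format to the result."""

    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        # Remember how the input dataset was formatted (e.g. "numpy", ["label"]).
        fmt_type, fmt_columns = self._format_type, self._format_columns
        out = method(self, *args, **kwargs)
        # Re-apply that format, keeping only columns that still exist
        # after the transform (flatten may change the column set).
        if fmt_columns is not None:
            fmt_columns = [c for c in fmt_columns if c in out.column_names]
        out.set_format(type=fmt_type, columns=fmt_columns)
        return out

    return wrapper
```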
https://api.github.com/repos/huggingface/datasets/issues/4492
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4492/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4492/comments
https://api.github.com/repos/huggingface/datasets/issues/4492/events
https://github.com/huggingface/datasets/pull/4492
1,271,112,497
PR_kwDODunzps45pktu
4,492
Pin the revision in imagenet download links
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,226,917,000
1,655,228,113,000
1,655,227,545,000
MEMBER
null
Use the commit sha in the data file URLs of the imagenet-1k download script, in case we want to restructure the data files in the future. For example, we may split them into many more shards for better parallelism. cc @mariosasko
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4492/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4492/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4492", "html_url": "https://github.com/huggingface/datasets/pull/4492", "diff_url": "https://github.com/huggingface/datasets/pull/4492.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4492.patch", "merged_at": 1655227545000 }
true
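To illustrate the pinning idea from the record above, the Hub serves files at `resolve/<revision>` URLs, so a download script can embed a fixed commit sha. The repository id, sha, and file path below are placeholders, not the actual imagenet-1k values.

```python
# Placeholder values for illustration only.
repo_id = "some-org/some-dataset"
revision = "0123456789abcdef0123456789abcdef01234567"  # a pinned commit sha
filename = "data/train-00000.tar.gz"

# Resolving against a fixed sha keeps the URL valid even if files are later
# moved or re-sharded on the main branch.
url = f"https://huggingface.co/datasets/{repo_id}/resolve/{revision}/{filename}"
print(url)
```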
https://api.github.com/repos/huggingface/datasets/issues/4491
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4491/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4491/comments
https://api.github.com/repos/huggingface/datasets/issues/4491/events
https://github.com/huggingface/datasets/issues/4491
1,270,803,822
I_kwDODunzps5Lvu1u
4,491
Dataset Viewer issue for Pavithree/test
{ "login": "Pavithree", "id": 23344465, "node_id": "MDQ6VXNlcjIzMzQ0NDY1", "avatar_url": "https://avatars.githubusercontent.com/u/23344465?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Pavithree", "html_url": "https://github.com/Pavithree", "followers_url": "https://api.github.com/users/Pavithree/followers", "following_url": "https://api.github.com/users/Pavithree/following{/other_user}", "gists_url": "https://api.github.com/users/Pavithree/gists{/gist_id}", "starred_url": "https://api.github.com/users/Pavithree/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Pavithree/subscriptions", "organizations_url": "https://api.github.com/users/Pavithree/orgs", "repos_url": "https://api.github.com/users/Pavithree/repos", "events_url": "https://api.github.com/users/Pavithree/events{/privacy}", "received_events_url": "https://api.github.com/users/Pavithree/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
null
[ "This issue can be resolved according to this post https://stackoverflow.com/questions/70566660/parquet-with-null-columns-on-pyarrow. It looks like first data entry in the json file must not have any null values as pyarrow uses this first file to infer schema for entire dataset." ]
1,655,212,990,000
1,655,217,441,000
1,655,217,273,000
NONE
null
### Link https://huggingface.co/datasets/Pavithree/test ### Description I have extracted a subset of the original eli5 dataset found on Hugging Face. However, while loading the dataset it throws an ArrowNotImplementedError: Unsupported cast from string to null using function cast_null. Is there anything missing on my end? Kindly help. ### Owner _No response_
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4491/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4491/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4490
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4490/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4490/comments
https://api.github.com/repos/huggingface/datasets/issues/4490/events
https://github.com/huggingface/datasets/issues/4490
1,270,719,074
I_kwDODunzps5LvaJi
4,490
Use `torch.nested_tensor` for arrays of varying length in torch formatter
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,655,209,180,000
1,655,209,180,000
null
CONTRIBUTOR
null
Use `torch.nested_tensor` for arrays of varying length in `TorchFormatter`. The PyTorch API of nested tensors is in the prototype stage, so wait for it to become more mature.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4490/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4490/timeline
null
null
null
null
false
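For context, a nested tensor stores rows of different lengths without padding. The snippet below is a hedged illustration: the API was a prototype (`torch.nested_tensor`) when this issue was opened and later moved under `torch.nested`, so the exact entry point depends on the PyTorch version.

```python
import torch

# Two rows of different lengths, stored without padding.
rows = [torch.tensor([1, 2, 3]), torch.tensor([4, 5])]
nt = torch.nested.nested_tensor(rows)

print(nt.is_nested)                            # True
print(torch.nested.to_padded_tensor(nt, 0.0))  # materialize a padded view when needed
```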
https://api.github.com/repos/huggingface/datasets/issues/4489
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4489/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4489/comments
https://api.github.com/repos/huggingface/datasets/issues/4489/events
https://github.com/huggingface/datasets/pull/4489
1,270,706,195
PR_kwDODunzps45oONF
4,489
Add SV-Ident dataset
{ "login": "e-tornike", "id": 20404466, "node_id": "MDQ6VXNlcjIwNDA0NDY2", "avatar_url": "https://avatars.githubusercontent.com/u/20404466?v=4", "gravatar_id": "", "url": "https://api.github.com/users/e-tornike", "html_url": "https://github.com/e-tornike", "followers_url": "https://api.github.com/users/e-tornike/followers", "following_url": "https://api.github.com/users/e-tornike/following{/other_user}", "gists_url": "https://api.github.com/users/e-tornike/gists{/gist_id}", "starred_url": "https://api.github.com/users/e-tornike/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/e-tornike/subscriptions", "organizations_url": "https://api.github.com/users/e-tornike/orgs", "repos_url": "https://api.github.com/users/e-tornike/repos", "events_url": "https://api.github.com/users/e-tornike/events{/privacy}", "received_events_url": "https://api.github.com/users/e-tornike/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @e-tornike, thanks a lot for adding this interesting dataset.\r\n\r\nRecently at Hugging Face, we have decided to give priority to adding datasets directly on the Hub. Would you mind to transfer your loading script to the Hub? You could create a dedicated org namespace, so that your dataset would be accessible using `vadis/sv_ident` or `sdproc/sv_ident` or `coling/sv_ident` (as you prefer).\r\n\r\nYou have an example here: https://huggingface.co/datasets/projecte-aina/catalan_textual_corpus", "Additionally, please feel free to ping us if you need assistance/help in creating this dataset.\r\n\r\nYou could put the link to your Hub dataset here in this Issue discussion page, so that we can follow the progress. :)", "Hi @albertvillanova, thanks for the feedback! Uploading via the Hub is a lot easier! \r\n\r\nI've uploaded the dataset here: https://huggingface.co/datasets/vadis/sv-ident, but it's reporting a \"Status400Error\". Is there any way to see the logs of the dataset script and what is causing the error?", "Hi @e-tornike, good job at https://huggingface.co/datasets/vadis/sv-ident.\r\n\r\nNormally, you can run locally the loading of the dataset by passing `streaming=True` (as the previewer does):\r\n```python\r\nds = load_dataset(\"path/to/sv_ident.py, split=\"train\", streaming=True)\r\nitem = next(iter(ds))\r\nitem\r\n```\r\n\r\nLet me have a look and open a discussion on your Hub repo! ;)", "I've opened an Issue: \r\n- #4527 " ]
1,655,208,540,000
1,655,714,906,000
1,655,714,247,000
NONE
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4489/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4489/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4489", "html_url": "https://github.com/huggingface/datasets/pull/4489", "diff_url": "https://github.com/huggingface/datasets/pull/4489.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4489.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4488
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4488/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4488/comments
https://api.github.com/repos/huggingface/datasets/issues/4488/events
https://github.com/huggingface/datasets/pull/4488
1,270,613,857
PR_kwDODunzps45n6Ja
4,488
Update PASS dataset version
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,203,634,000
1,655,224,915,000
1,655,224,348,000
CONTRIBUTOR
null
Update the PASS dataset to version v3 (the newest one) from the [version history](https://github.com/yukimasano/PASS/blob/main/version_history.txt). PS: The older versions are not exposed as configs in the script because v1 was removed from Zenodo, and the same thing will probably happen to v2.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4488/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4488/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4488", "html_url": "https://github.com/huggingface/datasets/pull/4488", "diff_url": "https://github.com/huggingface/datasets/pull/4488.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4488.patch", "merged_at": 1655224348000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4487
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4487/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4487/comments
https://api.github.com/repos/huggingface/datasets/issues/4487/events
https://github.com/huggingface/datasets/pull/4487
1,270,525,163
PR_kwDODunzps45nm5J
4,487
Support streaming UDHR dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,199,213,000
1,655,269,762,000
1,655,269,189,000
MEMBER
null
This PR: - Adds support for streaming UDHR dataset - Adds the BCP 47 language code as feature
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4487/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4487/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4487", "html_url": "https://github.com/huggingface/datasets/pull/4487", "diff_url": "https://github.com/huggingface/datasets/pull/4487.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4487.patch", "merged_at": 1655269189000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4486
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4486/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4486/comments
https://api.github.com/repos/huggingface/datasets/issues/4486/events
https://github.com/huggingface/datasets/pull/4486
1,269,518,084
PR_kwDODunzps45kP88
4,486
Add CCAgT dataset
{ "login": "johnnv1", "id": 20444345, "node_id": "MDQ6VXNlcjIwNDQ0MzQ1", "avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4", "gravatar_id": "", "url": "https://api.github.com/users/johnnv1", "html_url": "https://github.com/johnnv1", "followers_url": "https://api.github.com/users/johnnv1/followers", "following_url": "https://api.github.com/users/johnnv1/following{/other_user}", "gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}", "starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions", "organizations_url": "https://api.github.com/users/johnnv1/orgs", "repos_url": "https://api.github.com/users/johnnv1/repos", "events_url": "https://api.github.com/users/johnnv1/events{/privacy}", "received_events_url": "https://api.github.com/users/johnnv1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi! Excellent job @johnnv1! There were typos/missing words in the card, so I took the liberty to rewrite some parts to make them easier to understand. Let me know if you are ok with the changes. Also, feel free to add some info under the `Who are the annotators?` section.\r\n\r\nAdditionally, I fixed the issue with streaming and renamed the `digits` feature to `objects`.\r\n\r\n@lhoestq Are you ok with skipping the dummy data test here as it's tricky to generate it due to the splits separation logic?", "I think I can also add instance segmentation: by exposing the segment of each instance, so it will be similar with object detection:\r\n\r\n- `instances`: a dictionary containing bounding boxes, segments, and labels of the cell objects \r\n - `bbox`: a list of bounding boxes\r\n - `segment`: a list of segments in format of `[polygon]`, where each polygon is `[x0, y0, ..., xn, yn]`\r\n - `label`: a list of integers representing the category\r\n\r\nDo you think it would be ok?", "Don't you think it makes sense to keep the same category IDs for all approaches? \r\n\r\nNow we have:\r\n - nucleus category ID equals 0 for object detection and instance segmentation\r\n - background category ID equals 0 (on the masks) for semantic segmentation", "I find it weird to have a dummy label in object detection just to align the mapping with semantic segmentation. Instead, let's explain in the card (under Data Fields -> annotation) what the pixel values mean (background + object detection labels)", "Ok, I can do that in the next few days. I will create a `lapix` organization, and I will add this dataset and also #4565", "So, I think we can close this PR? I have already moved these files there.\r\n\r\nThe link of CCAgT dataset is: https://huggingface.co/datasets/lapix/CCAgT\r\n\r\n๐Ÿค— ", "Woohoo awesome !\r\n\r\nclosing this PR :)" ]
1,655,130,019,000
1,656,945,423,000
1,656,944,745,000
NONE
null
As described in #4075, I could not generate the dummy data. Also, the data repository does not provide the split IDs, but I copied the functions that produce the correct data split. In summary, to have a better distribution, the data in this dataset should be separated based on the number of NORs in each image.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4486/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4486/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4486", "html_url": "https://github.com/huggingface/datasets/pull/4486", "diff_url": "https://github.com/huggingface/datasets/pull/4486.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4486.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4485
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4485/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4485/comments
https://api.github.com/repos/huggingface/datasets/issues/4485/events
https://github.com/huggingface/datasets/pull/4485
1,269,463,054
PR_kwDODunzps45kD7A
4,485
Fix cast to null
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,127,872,000
1,655,214,234,000
1,655,213,654,000
MEMBER
null
It currently fails with `ArrowNotImplementedError` instead of `TypeError` when one tries to cast an integer to the null type. Because of this, type inference breaks when one replaces null values with integers in `map` (it first tries to cast to the previous type before inferring the new type). Fixes https://github.com/huggingface/datasets/issues/4483
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4485/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4485/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4485", "html_url": "https://github.com/huggingface/datasets/pull/4485", "diff_url": "https://github.com/huggingface/datasets/pull/4485.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4485.patch", "merged_at": 1655213654000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4484
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4484/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4484/comments
https://api.github.com/repos/huggingface/datasets/issues/4484/events
https://github.com/huggingface/datasets/pull/4484
1,269,383,811
PR_kwDODunzps45jywZ
4,484
Better ImportError message when a dataset script dependency is missing
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Discussed offline with @mariosasko, merging :)", "Fwiw, i think this same issue is occurring on the datasets website page, where preview isn't available due to the `bigbench` import error", "For the preview of BigBench datasets, we're just waiting for bigbench to have a stable version on PyPI, instead of the one hosted on GCS ;)" ]
1,655,124,277,000
1,657,290,644,000
1,655,128,247,000
MEMBER
null
When a dependency is missing for a dataset script, an ImportError message is shown, with a tip to install the missing dependencies. This message is not ideal at the moment: it may show duplicate dependencies, and is not very readable. I improved it from ``` ImportError: To be able to use bigbench, you need to install the following dependencies['bigbench', 'bigbench', 'bigbench', 'bigbench'] using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz" bigbench bigbench bigbench' for instance' ``` to ``` ImportError: To be able to use bigbench, you need to install the following dependency: bigbench. Please install it using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz"' for instance' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4484/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4484/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4484", "html_url": "https://github.com/huggingface/datasets/pull/4484", "diff_url": "https://github.com/huggingface/datasets/pull/4484.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4484.patch", "merged_at": 1655128247000 }
true
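The gist of the message cleanup described above (deduplicate the collected dependencies and pluralize correctly) can be sketched in a few lines; the variable names are illustrative and this is not the exact code from the PR.

```python
missing = ["bigbench", "bigbench", "bigbench", "bigbench"]  # as collected, with duplicates
unique = sorted(set(missing))

noun = "dependency" if len(unique) == 1 else "dependencies"
message = (
    f"To be able to use bigbench, you need to install the following {noun}: "
    f"{', '.join(unique)}."
)
print(message)
# To be able to use bigbench, you need to install the following dependency: bigbench.
```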
https://api.github.com/repos/huggingface/datasets/issues/4483
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4483/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4483/comments
https://api.github.com/repos/huggingface/datasets/issues/4483/events
https://github.com/huggingface/datasets/issues/4483
1,269,253,840
I_kwDODunzps5Lp0bQ
4,483
Dataset.map throws pyarrow.lib.ArrowNotImplementedError when converting from list of empty lists
{ "login": "sanderland", "id": 48946947, "node_id": "MDQ6VXNlcjQ4OTQ2OTQ3", "avatar_url": "https://avatars.githubusercontent.com/u/48946947?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanderland", "html_url": "https://github.com/sanderland", "followers_url": "https://api.github.com/users/sanderland/followers", "following_url": "https://api.github.com/users/sanderland/following{/other_user}", "gists_url": "https://api.github.com/users/sanderland/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanderland/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanderland/subscriptions", "organizations_url": "https://api.github.com/users/sanderland/orgs", "repos_url": "https://api.github.com/users/sanderland/repos", "events_url": "https://api.github.com/users/sanderland/events{/privacy}", "received_events_url": "https://api.github.com/users/sanderland/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @sanderland ! Thanks for reporting :) This is a bug, I opened a PR to fix it. We'll do a new release soon\r\n\r\nIn the meantime you can fix it by specifying in advance that the \"label\" are integers:\r\n```python\r\nimport numpy as np\r\n\r\nds = Dataset.from_dict(\r\n {\r\n \"text\": [\"the lazy dog jumps over the quick fox\", \"another sentence\"],\r\n \"label\": [[], []],\r\n }\r\n)\r\n# explicitly say that the \"label\" type is int64, even though it contains only null values\r\nds = ds.cast_column(\"label\", Sequence(Value(\"int64\")))\r\n\r\ndef mapper(features):\r\n features['label'] = [\r\n [0,0,0] for l in features['label']\r\n ]\r\n return features\r\n\r\nds_mapped = ds.map(mapper,batched=True)\r\n```" ]
1,655,117,272,000
1,655,213,654,000
1,655,213,654,000
NONE
null
## Describe the bug Dataset.map throws pyarrow.lib.ArrowNotImplementedError when converting from a type of 'empty lists' to 'lists with some type'. This appears to be due to the interaction of arrow internals and some assumptions made by datasets. The bug appeared when binarizing some labels, and then adding a dataset which had all these labels absent (to force the model to not label empty strings such with anything) Particularly the fact that this only happens in batched mode is strange. ## Steps to reproduce the bug ```python import numpy as np ds = Dataset.from_dict( { "text": ["the lazy dog jumps over the quick fox", "another sentence"], "label": [[], []], } ) def mapper(features): features['label'] = [ [0,0,0] for l in features['label'] ] return features ds_mapped = ds.map(mapper,batched=True) ``` ## Expected results Not crashing ## Actual results ``` ../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2346: in map return self._map_single( ../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:532: in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:499: in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ../.venv/lib/python3.8/site-packages/datasets/fingerprint.py:458: in wrapper out = func(self, *args, **kwargs) ../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2751: in _map_single writer.write_batch(batch) ../.venv/lib/python3.8/site-packages/datasets/arrow_writer.py:503: in write_batch arrays.append(pa.array(typed_sequence)) pyarrow/array.pxi:230: in pyarrow.lib.array ??? pyarrow/array.pxi:110: in pyarrow.lib._handle_arrow_array_protocol ??? ../.venv/lib/python3.8/site-packages/datasets/arrow_writer.py:198: in __arrow_array__ out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) ../.venv/lib/python3.8/site-packages/datasets/table.py:1675: in wrapper return func(array, *args, **kwargs) ../.venv/lib/python3.8/site-packages/datasets/table.py:1812: in cast_array_to_feature casted_values = _c(array.values, feature.feature) ../.venv/lib/python3.8/site-packages/datasets/table.py:1675: in wrapper return func(array, *args, **kwargs) ../.venv/lib/python3.8/site-packages/datasets/table.py:1843: in cast_array_to_feature return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) ../.venv/lib/python3.8/site-packages/datasets/table.py:1675: in wrapper return func(array, *args, **kwargs) ../.venv/lib/python3.8/site-packages/datasets/table.py:1752: in array_cast return array.cast(pa_type) pyarrow/array.pxi:915: in pyarrow.lib.Array.cast ??? ../.venv/lib/python3.8/site-packages/pyarrow/compute.py:376: in cast return call_function("cast", [arr], options) pyarrow/_compute.pyx:542: in pyarrow._compute.call_function ??? pyarrow/_compute.pyx:341: in pyarrow._compute.Function.call ??? pyarrow/error.pxi:144: in pyarrow.lib.pyarrow_internal_check_status ??? _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > ??? E pyarrow.lib.ArrowNotImplementedError: Unsupported cast from int64 to null using function cast_null pyarrow/error.pxi:121: ArrowNotImplementedError ``` ## Workarounds * Not using batched=True * Using an np.array([],dtype=float) or similar instead of [] in the input * Naming the output column differently from the input column ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.2 - Platform: Ubuntu - Python version: 3.8 - PyArrow version: 8.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4483/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4483/timeline
null
completed
null
null
false
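The asymmetry behind this report can be reproduced directly with pyarrow: casting a null-typed array to a concrete type is allowed, while casting a concrete type to null is not. A hedged sketch, assuming pyarrow 8.x behavior as described in the issue above:

```python
import pyarrow as pa

# null -> int64 works (the result is simply all nulls).
all_nulls = pa.array([None, None])  # inferred type: null
print(all_nulls.cast(pa.int64()))

# int64 -> null is the unsupported direction reported above.
ints = pa.array([1, 2])
try:
    ints.cast(pa.null())
except pa.ArrowNotImplementedError as err:
    print(err)  # Unsupported cast from int64 to null using function cast_null
```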
https://api.github.com/repos/huggingface/datasets/issues/4482
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4482/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4482/comments
https://api.github.com/repos/huggingface/datasets/issues/4482/events
https://github.com/huggingface/datasets/pull/4482
1,269,237,447
PR_kwDODunzps45jS_c
4,482
Test that TensorFlow is not imported on startup
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4482). All of your documentation changes will be reflected on that endpoint." ]
1,655,116,429,000
1,657,120,793,000
null
MEMBER
null
TF takes some time to be imported, and it also uses some GPU memory. I just added a test to make sure that, in the future, it is never imported by default when ```python import datasets ``` is called (a sketch of such a test follows this record). Right now this fails because `huggingface_hub` does import tensorflow (though this is fixed now on their `main` branch). I'll mark this PR as ready for review once `huggingface_hub` has a new release.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4482/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4482/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4482", "html_url": "https://github.com/huggingface/datasets/pull/4482", "diff_url": "https://github.com/huggingface/datasets/pull/4482.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4482.patch", "merged_at": null }
true
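A test along these lines can check `sys.modules` in a fresh interpreter, so that modules already imported by the test runner don't interfere. This is a hypothetical sketch, not the test added in the PR.

```python
import subprocess
import sys


def test_tensorflow_not_imported_on_startup():
    # Run in a clean interpreter: importing datasets must not pull in tensorflow.
    code = "import datasets, sys; assert 'tensorflow' not in sys.modules"
    result = subprocess.run([sys.executable, "-c", code])
    assert result.returncode == 0
```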
https://api.github.com/repos/huggingface/datasets/issues/4481
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4481/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4481/comments
https://api.github.com/repos/huggingface/datasets/issues/4481/events
https://github.com/huggingface/datasets/pull/4481
1,269,187,792
PR_kwDODunzps45jIRi
4,481
Fix iwslt2017
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "CI fails are just abut missing tags in the dataset card, merging !" ]
1,655,113,881,000
1,655,117,397,000
1,655,116,818,000
MEMBER
null
The files were moved to Google Drive, so I hosted them on the Hub instead (which is OK according to the license). I also updated the `datasets_infos.json`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4481/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4481/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4481", "html_url": "https://github.com/huggingface/datasets/pull/4481", "diff_url": "https://github.com/huggingface/datasets/pull/4481.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4481.patch", "merged_at": 1655116818000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4480
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4480/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4480/comments
https://api.github.com/repos/huggingface/datasets/issues/4480/events
https://github.com/huggingface/datasets/issues/4480
1,268,921,567
I_kwDODunzps5LojTf
4,480
Bigbench tensorflow GPU dependency
{ "login": "cceyda", "id": 15624271, "node_id": "MDQ6VXNlcjE1NjI0Mjcx", "avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cceyda", "html_url": "https://github.com/cceyda", "followers_url": "https://api.github.com/users/cceyda/followers", "following_url": "https://api.github.com/users/cceyda/following{/other_user}", "gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}", "starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cceyda/subscriptions", "organizations_url": "https://api.github.com/users/cceyda/orgs", "repos_url": "https://api.github.com/users/cceyda/repos", "events_url": "https://api.github.com/users/cceyda/events{/privacy}", "received_events_url": "https://api.github.com/users/cceyda/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Thanks for reporting ! :) cc @andersjohanandreassen can you take a look at this ?\r\n\r\nAlso @cceyda feel free to open an issue at [BIG-Bench](https://github.com/google/BIG-bench) as well regarding the `AttributeError`", "I'm on vacation for the next week, so won't be able to do much debugging at the moment. Sorry for the inconvenience.\r\nBut I did quickly take a look:\r\n\r\n**pypi**:\r\nI managed to reproduce the above error with the pypi version begin out of date. \r\nThe version on `https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz` should be up to date, but it was my understanding that there was some issue with the pypi upload, so I don't even understand why there is a version [on pypi from April 1](https://pypi.org/project/bigbench/0.0.1/). Perhaps @ethansdyer, who's handling the pypi upload, knows the answer to that?\r\n\r\n**OOM error**:\r\nBut, I'm unable to reproduce the OOM error in a google colab with GPU enabled.\r\nThis is what I ran:\r\n```\r\n!pip install bigbench@https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz\r\n!pip install datasets\r\n\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"bigbench\",\"swedish_to_german_proverbs\")\r\n``` \r\nThe `swedish_to_german_proverbs`task is only 72 examples, so I don't understand what could be causing the OOM error. Loading the task has no effect on the RAM for me. @cceyda Can you confirm that this does not occur in a [colab](https://colab.research.google.com/)?\r\nIf the GPU is somehow causing issues on your system, disabling the GPU from TF might be an option too\r\n```\r\nimport os\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"-1\"\r\n```\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n", "Solved.\r\nYes it works on colab, and somehow magically on my machine too now. hmm not sure what was wrong before I had used a fresh venv both times with just the dataloading code, and tried multiple times. (maybe just a wrong tensorflow version got mixed up somehow) The tensorflow call seems to come from the bigbench side anyway.\r\n\r\nabout bigbench pypi version update, I opened an issue over there https://github.com/google/BIG-bench/issues/846\r\n\r\nanyway closing this now. If anyone else has the same problem can re-open." ]
1,655,097,846,000
1,655,235,924,000
1,655,235,923,000
CONTRIBUTOR
null
## Describe the bug Loading bigbech ```py from datasets import load_dataset dataset = load_dataset("bigbench","swedish_to_german_proverbs") ``` tries to use gpu and fails with OOM with the following error ``` Downloading and preparing dataset bigbench/swedish_to_german_proverbs (download: Unknown size, generated: 68.92 KiB, post-processed: Unknown size, total: 68.92 KiB) to /home/ceyda/.cache/huggingface/datasets/bigbench/swedish_to_german_proverbs/1.0.0/7d2f6e537fa937dfaac8b1c1df782f2055071d3fd8e4f4ae93d28012a354ced0... Generating default split: 0%| | 0/72 [00:00<?, ? examples/s]2022-06-13 14:11:04.154469: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2022-06-13 14:11:05.133600: F tensorflow/core/platform/statusor.cc:33] Attempting to fetch value instead of handling error INTERNAL: failed initializing StreamExecutor for CUDA device ordinal 3: INTERNAL: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_OUT_OF_MEMORY: out of memory; total memory reported: 25396838400 Aborted (core dumped) ``` I think this is because bigbench dependency (below) installs tensorflow (GPU version) and dataloading tries to use GPU as default. `pip install bigbench@https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz` while just doing 'pip install bigbench' results in following error ``` File "/home/ceyda/.local/lib/python3.7/site-packages/datasets/load.py", line 109, in import_main_class module = importlib.import_module(module_path) File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/ceyda/.cache/huggingface/modules/datasets_modules/datasets/bigbench/7d2f6e537fa937dfaac8b1c1df782f2055071d3fd8e4f4ae93d28012a354ced0/bigbench.py", line 118, in <module> class Bigbench(datasets.GeneratorBasedBuilder): File "/home/ceyda/.cache/huggingface/modules/datasets_modules/datasets/bigbench/7d2f6e537fa937dfaac8b1c1df782f2055071d3fd8e4f4ae93d28012a354ced0/bigbench.py", line 127, in Bigbench BigBenchConfig(name=name, version=datasets.Version("1.0.0")) for name in bb_utils.get_all_json_task_names() AttributeError: module 'bigbench.api.util' has no attribute 'get_all_json_task_names' ``` ## Steps to avoid the bug Not ideal but can solve with (since I don't really use tensorflow elsewhere) `pip uninstall tensorflow` `pip install tensorflow-cpu` ## Environment info - datasets @ master - Python version: 3.7
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4480/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4480/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4479
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4479/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4479/comments
https://api.github.com/repos/huggingface/datasets/issues/4479/events
https://github.com/huggingface/datasets/pull/4479
1,268,558,237
PR_kwDODunzps45hHtZ
4,479
Include entity positions as feature in ReCoRD
{ "login": "richarddwang", "id": 17963619, "node_id": "MDQ6VXNlcjE3OTYzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richarddwang", "html_url": "https://github.com/richarddwang", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "repos_url": "https://api.github.com/users/richarddwang/repos", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4479). All of your documentation changes will be reflected on that endpoint." ]
1,655,034,988,000
1,659,703,513,000
null
CONTRIBUTOR
null
https://huggingface.co/datasets/super_glue/viewer/record/validation TLDR: We need to record entity positions, which are included in the source data but excluded by the loading script, to enable efficient and effective training for ReCoRD. Currently, the loading script ignores the entity positions ("entity_start", "entity_end") and only records the entity text. This might be because the training method of the official baseline is to make n training instances from a data point by replacing "@placeholder" in the query with each entity individually. But that multiplies the already heavy computation several-fold. DeBERTa instead uses a method that takes entity embeddings by their positions in the passage, and thus makes one training instance from one data point. It is much more efficient and has proved effective for the ReCoRD task. Can anybody help me with the dataset card rendering error? Maybe @lhoestq ?
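For illustration, a minimal sketch of what a schema that keeps the entity spans could look like. The "entity_start"/"entity_end" information comes from the source data described above; the surrounding field names and structure are an assumption for illustration, not the actual `super_glue` loading script.

```py
import datasets

# Hypothetical ReCoRD example schema that keeps entity spans alongside the text;
# only the span information mirrors the source data, the rest is illustrative.
features = datasets.Features(
    {
        "passage": datasets.Value("string"),
        "query": datasets.Value("string"),
        "entities": datasets.Sequence(datasets.Value("string")),
        "entity_spans": datasets.Sequence(
            {
                "text": datasets.Value("string"),
                "start": datasets.Value("int32"),  # corresponds to "entity_start"
                "end": datasets.Value("int32"),    # corresponds to "entity_end"
            }
        ),
        "answers": datasets.Sequence(datasets.Value("string")),
    }
)
```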
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4479/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4479/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4479", "html_url": "https://github.com/huggingface/datasets/pull/4479", "diff_url": "https://github.com/huggingface/datasets/pull/4479.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4479.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4478
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4478/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4478/comments
https://api.github.com/repos/huggingface/datasets/issues/4478/events
https://github.com/huggingface/datasets/issues/4478
1,268,358,213
I_kwDODunzps5LmZxF
4,478
Dataset slow during model training
{ "login": "lehrig", "id": 9555494, "node_id": "MDQ6VXNlcjk1NTU0OTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9555494?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lehrig", "html_url": "https://github.com/lehrig", "followers_url": "https://api.github.com/users/lehrig/followers", "following_url": "https://api.github.com/users/lehrig/following{/other_user}", "gists_url": "https://api.github.com/users/lehrig/gists{/gist_id}", "starred_url": "https://api.github.com/users/lehrig/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lehrig/subscriptions", "organizations_url": "https://api.github.com/users/lehrig/orgs", "repos_url": "https://api.github.com/users/lehrig/repos", "events_url": "https://api.github.com/users/lehrig/events{/privacy}", "received_events_url": "https://api.github.com/users/lehrig/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi ! cc @Rocketknight1 maybe you know better ?\r\n\r\nI'm not too familiar with `tf.data.experimental.save`. Note that `datasets` uses memory mapping, so depending on your hardware and the disk you are using you can expect performance differences with a dataset loaded in RAM", "Hi @lehrig, I suspect what's happening here is that our `to_tf_dataset()` method has some performance issues when streaming samples. This is usually not a problem, but they become apparent when streaming a vision dataset into a very small vision model, which will need a lot of sample throughput to saturate the GPU.\r\n\r\nWhen you save a `tf.data.Dataset` with `tf.data.experimental.save`, all of the samples from the dataset (which are, in this case, batches of images), are saved to disk. When you load this saved dataset, you're effectively bypassing `to_tf_dataset()` entirely, which alleviates this performance bottleneck.\r\n\r\n`to_tf_dataset()` is something we're actively working on overhauling right now - particularly for image datasets, we want to make it possible to access the underlying images with `tf.data` without going through the current layer of indirection with `Arrow`, which should massively improve simplicity and performance. \r\n\r\nHowever, if you just want this to work quickly but without needing your save/load hack, my advice would be to simply load the dataset into memory if it's small enough to fit. Since all your samples have the same dimensions, you can do this simply with:\r\n\r\n```\r\ndataset = load_from_disk(prep_data_dir)\r\ndataset = dataset.with_format(\"numpy\")\r\ndata_in_memory = dataset[:]\r\n```\r\n\r\nThen you can simply do something like:\r\n\r\n```\r\nmodel.fit(data_in_memory[\"pixel_values\"], data_in_memory[\"labels\"])\r\n```", "Thanks for the information! 
\r\n\r\nI have now updated the training code like so:\r\n\r\n```\r\ndataset = load_from_disk(prep_data_dir)\r\ntrain_dataset = dataset[\"train\"][:]\r\nvalidation_dataset = dataset[\"dev\"][:]\r\n\r\n...\r\n\r\nmodel.fit(\r\n train_dataset[\"pixel_values\"],\r\n train_dataset[\"label\"],\r\n epochs=epochs,\r\n validation_data=(\r\n validation_dataset[\"pixel_values\"],\r\n validation_dataset[\"label\"]\r\n ),\r\n callbacks=[earlyStopping, mcp_save, reduce_lr_loss]\r\n)\r\n```\r\n\r\n- Creating the in-memory dataset is quite quick\r\n- But: There is now a long wait (~4-5 Minutes) before the training starts (why?)\r\n- And: Training times have improved but the very first epoch leaves me wondering why it takes so long (why?)\r\n\r\n**Epoch Breakdown:**\r\n- Epoch 1/10\r\n78s 12s/step - loss: 3.1307 - accuracy: 0.0737 - val_loss: 2.2827 - val_accuracy: 0.1273 - lr: 0.0010\r\n- Epoch 2/10\r\n1s 168ms/step - loss: 2.3616 - accuracy: 0.2350 - val_loss: 2.2679 - val_accuracy: 0.2182 - lr: 0.0010\r\n- Epoch 3/10\r\n1s 189ms/step - loss: 2.0221 - accuracy: 0.3180 - val_loss: 2.2670 - val_accuracy: 0.1818 - lr: 0.0010\r\n- Epoch 4/10\r\n0s 67ms/step - loss: 1.8895 - accuracy: 0.3548 - val_loss: 2.2771 - val_accuracy: 0.1273 - lr: 0.0010\r\n- Epoch 5/10\r\n0s 67ms/step - loss: 1.7846 - accuracy: 0.3963 - val_loss: 2.2860 - val_accuracy: 0.1455 - lr: 0.0010\r\n- Epoch 6/10\r\n0s 65ms/step - loss: 1.5946 - accuracy: 0.4516 - val_loss: 2.2938 - val_accuracy: 0.1636 - lr: 0.0010\r\n- Epoch 7/10\r\n0s 63ms/step - loss: 1.4217 - accuracy: 0.5115 - val_loss: 2.2968 - val_accuracy: 0.2182 - lr: 0.0010\r\n- Epoch 8/10\r\n0s 67ms/step - loss: 1.3089 - accuracy: 0.5438 - val_loss: 2.2842 - val_accuracy: 0.2182 - lr: 0.0010\r\n- Epoch 9/10\r\n1s 184ms/step - loss: 1.2480 - accuracy: 0.5806 - val_loss: 2.2652 - val_accuracy: 0.1818 - lr: 0.0010\r\n- Epoch 10/10\r\n0s 65ms/step - loss: 1.2699 - accuracy: 0.5622 - val_loss: 2.2670 - val_accuracy: 0.2000 - lr: 0.0010\r\n\r\n", "Regarding the new long ~5 min. wait introduced by the in-memory dataset update: this might be causing it? https://datascience.stackexchange.com/questions/33364/why-model-fit-generator-in-keras-is-taking-so-much-time-even-before-picking-the\r\n\r\nFor now, my save/load hack is still more performant, even though having more boiler-plate code :/ ", "That 5 minute wait is quite surprising! I don't have a good explanation for why it's happening, but it can't be an issue with `datasets` or `tf.data` because you're just fitting directly on Numpy arrays at this point. All I can suggest is seeing if you can isolate the issue - for example, does fitting on a smaller dataset containing only 10% of the original data reduce the wait? This might indicate the delay is caused by your data being copied or converted somehow. Alternatively, you could try removing things like callbacks and seeing if you could isolate the issue there." ]
1,654,976,419,000
1,655,208,271,000
null
NONE
null
## Describe the bug While migrating towards ๐Ÿค— Datasets, I encountered an odd performance degradation: training suddenly slows down dramatically. I train with an image dataset using Keras and execute a `to_tf_dataset` just before training. First, I have optimized my dataset following https://discuss.huggingface.co/t/solved-image-dataset-seems-slow-for-larger-image-size/10960/6, which actually improved the situation from what I had before but did not completely solve it. Second, I saved and loaded my dataset using `tf.data.experimental.save` and `tf.data.experimental.load` before training (for which I would have expected no performance change). However, I ended up with the performance I had before tinkering with ๐Ÿค— Datasets. Any idea what's the reason for this and how to speed-up training with ๐Ÿค— Datasets? ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset import os dataset_dir = "./dataset" prep_dataset_dir = "./prepdataset" model_dir = "./model" # Load Data dataset = load_dataset("Lehrig/Monkey-Species-Collection", "downsized") def read_image_file(example): with open(example["image"].filename, "rb") as f: example["image"] = {"bytes": f.read()} return example dataset = dataset.map(read_image_file) dataset.save_to_disk(dataset_dir) # Preprocess from datasets import ( Array3D, DatasetDict, Features, load_from_disk, Sequence, Value ) import numpy as np from transformers import ImageFeatureExtractionMixin dataset = load_from_disk(dataset_dir) num_classes = dataset["train"].features["label"].num_classes one_hot_matrix = np.eye(num_classes) feature_extractor = ImageFeatureExtractionMixin() def to_pixels(image): image = feature_extractor.resize(image, size=size) image = feature_extractor.to_numpy_array(image, channel_first=False) image = image / 255.0 return image def process(examples): examples["pixel_values"] = [ to_pixels(image) for image in examples["image"] ] examples["label"] = [ one_hot_matrix[label] for label in examples["label"] ] return examples features = Features({ "pixel_values": Array3D(dtype="float32", shape=(size, size, 3)), "label": Sequence(feature=Value(dtype="int32"), length=num_classes) }) prep_dataset = dataset.map( process, remove_columns=["image"], batched=True, batch_size=batch_size, num_proc=2, features=features, ) prep_dataset = prep_dataset.with_format("numpy") # Split train_dev_dataset = prep_dataset['test'].train_test_split( test_size=test_size, shuffle=True, seed=seed ) train_dev_test_dataset = DatasetDict({ 'train': train_dev_dataset['train'], 'dev': train_dev_dataset['test'], 'test': prep_dataset['test'], }) train_dev_test_dataset.save_to_disk(prep_dataset_dir) # Train Model import datetime import tensorflow as tf from tensorflow.keras import Sequential from tensorflow.keras.applications import InceptionV3 from tensorflow.keras.layers import Dense, Dropout, GlobalAveragePooling2D, BatchNormalization from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping from transformers import DefaultDataCollator dataset = load_from_disk(prep_data_dir) data_collator = DefaultDataCollator(return_tensors="tf") train_dataset = dataset["train"].to_tf_dataset( columns=['pixel_values'], label_cols=['label'], shuffle=True, batch_size=batch_size, collate_fn=data_collator ) validation_dataset = dataset["dev"].to_tf_dataset( columns=['pixel_values'], label_cols=['label'], shuffle=False, batch_size=batch_size, collate_fn=data_collator ) print(f'{datetime.datetime.now()} - Saving Data') 
tf.data.experimental.save(train_dataset, model_dir+"/train") tf.data.experimental.save(validation_dataset, model_dir+"/val") print(f'{datetime.datetime.now()} - Loading Data') train_dataset = tf.data.experimental.load(model_dir+"/train") validation_dataset = tf.data.experimental.load(model_dir+"/val") shape = np.shape(dataset["train"][0]["pixel_values"]) backbone = InceptionV3( include_top=False, weights='imagenet', input_shape=shape ) for layer in backbone.layers: layer.trainable = False model = Sequential() model.add(backbone) model.add(GlobalAveragePooling2D()) model.add(Dense(128, activation='relu')) model.add(BatchNormalization()) model.add(Dropout(0.3)) model.add(Dense(64, activation='relu')) model.add(BatchNormalization()) model.add(Dropout(0.3)) model.add(Dense(10, activation='softmax')) model.compile( optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'] ) print(model.summary()) earlyStopping = EarlyStopping( monitor='val_loss', patience=10, verbose=0, mode='min' ) mcp_save = ModelCheckpoint( f'{model_dir}/best_model.hdf5', save_best_only=True, monitor='val_loss', mode='min' ) reduce_lr_loss = ReduceLROnPlateau( monitor='val_loss', factor=0.1, patience=7, verbose=1, min_delta=0.0001, mode='min' ) hist = model.fit( train_dataset, epochs=epochs, validation_data=validation_dataset, callbacks=[earlyStopping, mcp_save, reduce_lr_loss] ) ``` ## Expected results Same performance when training without my "save/load hack" or a good explanation/recommendation about the issue. ## Actual results Performance slower without my "save/load hack". **Epoch Breakdown (without my "save/load hack"):** - Epoch 1/10 41s 2s/step - loss: 1.6302 - accuracy: 0.5048 - val_loss: 1.4713 - val_accuracy: 0.3273 - lr: 0.0010 - Epoch 2/10 32s 2s/step - loss: 0.5357 - accuracy: 0.8510 - val_loss: 1.0447 - val_accuracy: 0.5818 - lr: 0.0010 - Epoch 3/10 36s 3s/step - loss: 0.3547 - accuracy: 0.9231 - val_loss: 0.6245 - val_accuracy: 0.7091 - lr: 0.0010 - Epoch 4/10 36s 3s/step - loss: 0.2721 - accuracy: 0.9231 - val_loss: 0.3395 - val_accuracy: 0.9091 - lr: 0.0010 - Epoch 5/10 32s 2s/step - loss: 0.1676 - accuracy: 0.9856 - val_loss: 0.2187 - val_accuracy: 0.9636 - lr: 0.0010 - Epoch 6/10 42s 3s/step - loss: 0.2066 - accuracy: 0.9615 - val_loss: 0.1635 - val_accuracy: 0.9636 - lr: 0.0010 - Epoch 7/10 32s 2s/step - loss: 0.1814 - accuracy: 0.9423 - val_loss: 0.1418 - val_accuracy: 0.9636 - lr: 0.0010 - Epoch 8/10 32s 2s/step - loss: 0.1301 - accuracy: 0.9856 - val_loss: 0.1388 - val_accuracy: 0.9818 - lr: 0.0010 - Epoch 9/10 loss: 0.1102 - accuracy: 0.9856 - val_loss: 0.1185 - val_accuracy: 0.9818 - lr: 0.0010 - Epoch 10/10 32s 2s/step - loss: 0.1013 - accuracy: 0.9808 - val_loss: 0.0978 - val_accuracy: 0.9818 - lr: 0.0010 **Epoch Breakdown (with my "save/load hack"):** - Epoch 1/10 13s 625ms/step - loss: 3.0478 - accuracy: 0.1146 - val_loss: 2.3061 - val_accuracy: 0.0727 - lr: 0.0010 - Epoch 2/10 0s 80ms/step - loss: 2.3105 - accuracy: 0.2656 - val_loss: 2.3085 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 3/10 0s 77ms/step - loss: 1.8608 - accuracy: 0.3542 - val_loss: 2.3130 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 4/10 1s 98ms/step - loss: 1.8677 - accuracy: 0.3750 - val_loss: 2.3157 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 5/10 1s 204ms/step - loss: 1.5561 - accuracy: 0.4583 - val_loss: 2.3049 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 6/10 1s 210ms/step - loss: 1.4657 - accuracy: 0.4896 - val_loss: 2.2944 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 7/10 1s 205ms/step - loss: 1.4018 - 
accuracy: 0.5312 - val_loss: 2.2917 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 8/10 1s 207ms/step - loss: 1.2370 - accuracy: 0.5729 - val_loss: 2.2814 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 9/10 1s 214ms/step - loss: 1.1190 - accuracy: 0.6250 - val_loss: 2.2733 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 10/10 1s 207ms/step - loss: 1.1484 - accuracy: 0.6302 - val_loss: 2.2624 - val_accuracy: 0.0909 - lr: 0.0010 ## Environment info - `datasets` version: 2.2.2 - Platform: Linux-4.18.0-305.45.1.el8_4.ppc64le-ppc64le-with-glibc2.17 - Python version: 3.8.13 - PyArrow version: 7.0.0 - Pandas version: 1.4.2 - TensorFlow: 2.8.0 - GPU (used during training): Tesla V100-SXM2-32GB
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4478/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4478/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4477
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4477/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4477/comments
https://api.github.com/repos/huggingface/datasets/issues/4477/events
https://github.com/huggingface/datasets/issues/4477
1,268,308,986
I_kwDODunzps5LmNv6
4,477
Dataset Viewer issue for fgrezes/WIESP2022-NER
{ "login": "AshTayade", "id": 42551754, "node_id": "MDQ6VXNlcjQyNTUxNzU0", "avatar_url": "https://avatars.githubusercontent.com/u/42551754?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AshTayade", "html_url": "https://github.com/AshTayade", "followers_url": "https://api.github.com/users/AshTayade/followers", "following_url": "https://api.github.com/users/AshTayade/following{/other_user}", "gists_url": "https://api.github.com/users/AshTayade/gists{/gist_id}", "starred_url": "https://api.github.com/users/AshTayade/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AshTayade/subscriptions", "organizations_url": "https://api.github.com/users/AshTayade/orgs", "repos_url": "https://api.github.com/users/AshTayade/repos", "events_url": "https://api.github.com/users/AshTayade/events{/privacy}", "received_events_url": "https://api.github.com/users/AshTayade/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "https://huggingface.co/datasets/fgrezes/WIESP2022-NER\r\n\r\nThe error:\r\n\r\n```\r\nMessage: Couldn't find a dataset script at /src/services/worker/fgrezes/WIESP2022-NER/WIESP2022-NER.py or any data file in the same directory. Couldn't find 'fgrezes/WIESP2022-NER' on the Hugging Face Hub either: FileNotFoundError: Unable to resolve any data file that matches ['**test*', '**eval*'] in dataset repository fgrezes/WIESP2022-NER with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']\r\n```\r\n\r\nI understand the issue is not related to the dataset viewer in itself, but with the autodetection of the data files without a loading script in the datasets library. cc @lhoestq @albertvillanova @mariosasko ", "Apparently it finds `scoring-scripts/compute_seqeval.py` which matches `**eval*`, a regex that detects a test split. We should probably improve the regex because it's not supposed to catch this kind of files. It must also only check for files with supported extensions: txt, csv, png etc." ]
1,654,962,557,000
1,658,149,653,000
1,658,149,653,000
NONE
null
### Link _No response_ ### Description _No response_ ### Owner _No response_
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4477/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4477/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4476
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4476/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4476/comments
https://api.github.com/repos/huggingface/datasets/issues/4476/events
https://github.com/huggingface/datasets/issues/4476
1,267,987,499
I_kwDODunzps5Lk_Qr
4,476
`to_pandas` doesn't take into account format.
{ "login": "Dref360", "id": 8976546, "node_id": "MDQ6VXNlcjg5NzY1NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dref360", "html_url": "https://github.com/Dref360", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "organizations_url": "https://api.github.com/users/Dref360/orgs", "repos_url": "https://api.github.com/users/Dref360/repos", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "received_events_url": "https://api.github.com/users/Dref360/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Thanks for opening a discussion :)\r\n\r\nNote that you can use `.remove_columns(...)` to keep only the ones you're interested in before calling `.to_pandas()`", "Yes I can do that thank you!\r\n\r\nDo you think that conceptually my example should work? If not, I'm happy to close this issue. \r\n\r\nIf yes, I can start working on it.", "Hi! Instead of `with_format(columns=['a', 'b']).to_pandas()`, use `with_format(\"pandas\", columns=[\"a\", \"b\"])` for easy conversion of the parts of the dataset to pandas via indexing/slicing.\r\n\r\nThe full code:\r\n```python\r\nfrom datasets import Dataset\r\n\r\nds = Dataset.from_dict({'a': [1,2,3], 'b': [5,6,7], 'c': [8,9,10]})\r\npandas_df = ds.with_format(\"pandas\", columns=['a', 'b'])[:]\r\n```", "Ahhhh Thank you!\r\n\r\nclosing then :)" ]
1,654,892,731,000
1,655,314,901,000
1,655,314,901,000
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** I have a large dataset that I need to convert part of to pandas to do some further analysis. Calling `to_pandas` directly on it is expensive. So I thought I could simply select the columns that I want and then call `to_pandas`. **Describe the solution you'd like** ```python from datasets import Dataset ds = Dataset.from_dict({'a': [1,2,3], 'b': [5,6,7], 'c': [8,9,10]}) pandas_df = ds.with_format(columns=['a', 'b']).to_pandas() # I would expect `pandas_df` to only include a,b as column. ``` **Describe alternatives you've considered** I could remove all columns that I don't want? But I don't know all of them in advance. **Additional context** I can probably make a PR with some pointers.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4476/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4476/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4475
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4475/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4475/comments
https://api.github.com/repos/huggingface/datasets/issues/4475/events
https://github.com/huggingface/datasets/pull/4475
1,267,798,451
PR_kwDODunzps45eufw
4,475
Improve error message for missing packages from inside dataset script
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I opened a PR before I noticed yours ^^' You can find it here: https://github.com/huggingface/datasets/pull/4484\r\n\r\nThe only comment I have regarding your message is that it possibly shows several `pip install` commands, whereas one can run one single `pip install` command with the list of missing dependencies, which is maybe simpler.\r\n\r\nLet me know which one your prefer", "Closing in favor of #4484. " ]
1,654,880,376,000
1,655,126,787,000
1,655,126,203,000
CONTRIBUTOR
null
Improve the error message for missing packages from inside a dataset script: With this change, the error message for missing packages for `bigbench` looks as follows: ``` ImportError: To be able to use bigbench, you need to install the following dependencies: - 'bigbench' using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz"' ``` And this is how it looked before: ``` ImportError: To be able to use bigbench, you need to install the following dependencies['bigbench', 'bigbench', 'bigbench', 'bigbench'] using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz" bigbench bigbench bigbench' for instance' ```
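As a rough sketch of the kind of formatting change involved (the helper and variable names below are hypothetical, not the actual `datasets` internals; it only illustrates deduplicating the dependency list and printing one `pip install` hint per package):

```python
def format_missing_dependencies_error(module_name, missing):
    # `missing` is assumed to be a list of (pip_name, pip_spec) pairs;
    # duplicates are dropped so each dependency is listed only once.
    seen = {}
    for pip_name, pip_spec in missing:
        seen.setdefault(pip_name, pip_spec)
    lines = [f"To be able to use {module_name}, you need to install the following dependencies:"]
    for pip_name, pip_spec in seen.items():
        lines.append(f" - '{pip_name}' using 'pip install \"{pip_spec}\"'")
    return "\n".join(lines)


print(
    format_missing_dependencies_error(
        "bigbench",
        [("bigbench", "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz")] * 4,
    )
)
```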
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4475/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4475/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4475", "html_url": "https://github.com/huggingface/datasets/pull/4475", "diff_url": "https://github.com/huggingface/datasets/pull/4475.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4475.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4474
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4474/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4474/comments
https://api.github.com/repos/huggingface/datasets/issues/4474/events
https://github.com/huggingface/datasets/pull/4474
1,267,767,541
PR_kwDODunzps45en98
4,474
[Docs] How to use with PyTorch page
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,878,349,000
1,655,217,632,000
1,655,215,473,000
MEMBER
null
Currently the docs about PyTorch are scattered around different pages, and we were missing a place to explain more in depth how to use and optimize a dataset for PyTorch. This PR is related to #4457 which is the TF counterpart :) cc @Rocketknight1 we can try to align both documentations contents now I think cc @stevhliu let me know what you think !
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4474/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4474/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4474", "html_url": "https://github.com/huggingface/datasets/pull/4474", "diff_url": "https://github.com/huggingface/datasets/pull/4474.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4474.patch", "merged_at": 1655215472000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4473
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4473/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4473/comments
https://api.github.com/repos/huggingface/datasets/issues/4473/events
https://github.com/huggingface/datasets/pull/4473
1,267,555,994
PR_kwDODunzps45d5-R
4,473
Add SST-2 dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "on the hub this dataset is referenced as `sst-2` not `sst2` โ€“ is there a canonical orthography? If not, could we name it `sst-2`?", "@julien-c, we normally do not use hyphens for dataset names: whenever the original dataset name contains a hyphen, we usually:\r\n- either suppress it: CoNLL-2000 (`conll2000`), CORD-19 (`cord19`)\r\n- or replace it with underscore: CC-News (`cc_news`), SQuAD-es (`squad_es`)\r\n\r\nThere are some exceptions though... (I wonder why)\r\n\r\nI think, the reason is there was a 1-to-1 relation with the corresponding Python module name.\r\n\r\nI personally find confusing not having a rule and using both hyphens and underscores indistinctly: you never know which is the right orthography.\r\n\r\nWhichever the decision we make, I would prefer to be applied consistently.\r\n\r\nAlso note that we already implemented this dataset as part of GLUE: https://github.com/huggingface/datasets/blob/master/datasets/glue/glue.py#L163\r\n- dataset name: `glue`\r\n- config name: `sst2`\r\n\r\nOn the other hand, let's see how other libraries name it:\r\n- torchtext: `SST2` https://pytorch.org/text/stable/datasets.html#sst2\r\n- OpenAI CLIP: `rendered-sst2` https://github.com/openai/CLIP/blob/main/data/rendered-sst2.md\r\n- Kaggle: `SST2` https://www.kaggle.com/datasets/atulanandjha/stanford-sentiment-treebank-v2-sst2/version/22\r\n- TensorFlow Datasets: `glue/sst2` https://www.tensorflow.org/datasets/catalog/glue#gluesst2", "Ok, another option is to open PRs against the models in https://huggingface.co/models?datasets=sst-2 to change their dataset reference to `sst2`\r\n\r\n(BTW some models refer to `sst2` already โ€“ but they're less popular: https://huggingface.co/models?datasets=sst2)", "OK, I'm taking care of the subsequent PRs on models to align with this dataset name." ]
1,654,868,246,000
1,655,129,494,000
1,655,128,869,000
MEMBER
null
Add SST-2 dataset. Currently it is part of the GLUE benchmark. This PR adds it as a standalone dataset. CC: @julien-c
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4473/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4473/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4473", "html_url": "https://github.com/huggingface/datasets/pull/4473", "diff_url": "https://github.com/huggingface/datasets/pull/4473.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4473.patch", "merged_at": 1655128869000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4472
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4472/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4472/comments
https://api.github.com/repos/huggingface/datasets/issues/4472/events
https://github.com/huggingface/datasets/pull/4472
1,267,488,523
PR_kwDODunzps45drcb
4,472
Fix 401 error for unauthenticated requests to non-existing repos
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,864,691,000
1,654,866,311,000
1,654,865,757,000
MEMBER
null
The Hub now returns 401 instead of 404 for unauthenticated requests to non-existing repos. This PR adds support for the 401 error and fixes the CI failures on `master`
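For illustration, a hedged sketch of what treating 401 like 404 can look like when probing a dataset repo; the helper below is hypothetical and not the actual `datasets` code.

```python
import requests


def repo_exists(repo_id, token=None):
    # The Hub returns 404 for a missing repo when authenticated, but 401 for an
    # unauthenticated request to a non-existing (or private) repo, so both
    # status codes are treated as "not found" here.
    headers = {"authorization": f"Bearer {token}"} if token else {}
    response = requests.get(f"https://huggingface.co/api/datasets/{repo_id}", headers=headers)
    if response.status_code in (401, 404):
        return False
    response.raise_for_status()
    return True
```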
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4472/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4472/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4472", "html_url": "https://github.com/huggingface/datasets/pull/4472", "diff_url": "https://github.com/huggingface/datasets/pull/4472.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4472.patch", "merged_at": 1654865756000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4471
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4471/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4471/comments
https://api.github.com/repos/huggingface/datasets/issues/4471/events
https://github.com/huggingface/datasets/issues/4471
1,267,475,268
I_kwDODunzps5LjCNE
4,471
CI error with repo lhoestq/_dummy
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "fixed by https://github.com/huggingface/datasets/pull/4472" ]
1,654,863,966,000
1,654,867,493,000
1,654,867,493,000
MEMBER
null
## Describe the bug CI is failing because of repo "lhoestq/_dummy". See: https://app.circleci.com/pipelines/github/huggingface/datasets/12461/workflows/1b040b45-9578-4ab9-8c44-c643c4eb8691/jobs/74269 ``` requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/datasets/lhoestq/_dummy?full=true ``` The repo seems to no longer exist: https://huggingface.co/api/datasets/lhoestq/_dummy ``` error: "Repository not found" ``` CC: @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4471/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4471/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4470
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4470/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4470/comments
https://api.github.com/repos/huggingface/datasets/issues/4470/events
https://github.com/huggingface/datasets/pull/4470
1,267,470,051
PR_kwDODunzps45dnYw
4,470
Reorder returned validation/test splits in script template
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,863,673,000
1,654,884,250,000
1,654,883,690,000
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4470/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4470/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4470", "html_url": "https://github.com/huggingface/datasets/pull/4470", "diff_url": "https://github.com/huggingface/datasets/pull/4470.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4470.patch", "merged_at": 1654883690000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4469
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4469/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4469/comments
https://api.github.com/repos/huggingface/datasets/issues/4469/events
https://github.com/huggingface/datasets/pull/4469
1,267,213,849
PR_kwDODunzps45cweQ
4,469
Replace data URLs in wider_face dataset once hosted on the Hub
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,848,805,000
1,654,879,328,000
1,654,878,766,000
MEMBER
null
This PR replaces the URLs of data files in Google Drive with our Hub ones, now that the data owners have agreed to host their data on the Hub. They also informed us that their dataset is licensed under CC BY-NC-ND.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4469/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4469/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4469", "html_url": "https://github.com/huggingface/datasets/pull/4469", "diff_url": "https://github.com/huggingface/datasets/pull/4469.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4469.patch", "merged_at": 1654878766000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4468
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4468/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4468/comments
https://api.github.com/repos/huggingface/datasets/issues/4468/events
https://github.com/huggingface/datasets/pull/4468
1,266,715,742
PR_kwDODunzps45bERK
4,468
Generalize tutorials for audio and vision
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,812,044,000
1,655,223,722,000
1,655,223,120,000
MEMBER
null
This PR updates the tutorials to be more generalizable to all modalities. After reading the tutorials, a user should be able to load any type of dataset, know how to index into and slice a dataset, and do the most basic/common type of preprocessing (tokenization, resampling, applying transforms) depending on their dataset. Other changes include: - Removed the sections about a dataset's metadata, features, and columns because we cover this in an earlier tutorial about inspecting the `DatasetInfo` through the dataset builder. - Separate the sharing dataset tutorial into two sections: (1) uploading via the web interface and (2) using the `huggingface_hub` library. - Renamed some tutorials in the TOC to be more clear and specific. - Added more text to nudge users towards joining the community and asking questions on the forums. - If it's okay with everyone, I'd also like to remove the section about loading and using metrics since we have the `evaluate` docs now.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4468/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4468/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4468", "html_url": "https://github.com/huggingface/datasets/pull/4468", "diff_url": "https://github.com/huggingface/datasets/pull/4468.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4468.patch", "merged_at": 1655223120000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4467
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4467/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4467/comments
https://api.github.com/repos/huggingface/datasets/issues/4467/events
https://github.com/huggingface/datasets/issues/4467
1,266,218,358
I_kwDODunzps5LePV2
4,467
Transcript string 'null' converted to [None] by load_dataset()
{ "login": "mbarnig", "id": 1360633, "node_id": "MDQ6VXNlcjEzNjA2MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/1360633?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mbarnig", "html_url": "https://github.com/mbarnig", "followers_url": "https://api.github.com/users/mbarnig/followers", "following_url": "https://api.github.com/users/mbarnig/following{/other_user}", "gists_url": "https://api.github.com/users/mbarnig/gists{/gist_id}", "starred_url": "https://api.github.com/users/mbarnig/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mbarnig/subscriptions", "organizations_url": "https://api.github.com/users/mbarnig/orgs", "repos_url": "https://api.github.com/users/mbarnig/repos", "events_url": "https://api.github.com/users/mbarnig/events{/privacy}", "received_events_url": "https://api.github.com/users/mbarnig/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @mbarnig, thanks for reporting.\r\n\r\nPlease note that is an expected behavior by `pandas` (we use the `pandas` library to parse CSV files): https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html\r\n```\r\nBy default the following values are interpreted as NaN: \r\nโ€˜โ€™, โ€˜#N/Aโ€™, โ€˜#N/A N/Aโ€™, โ€˜#NAโ€™, โ€˜-1.#INDโ€™, โ€˜-1.#QNANโ€™, โ€˜-NaNโ€™, โ€˜-nanโ€™, โ€˜1.#INDโ€™, โ€˜1.#QNANโ€™, โ€˜<NA>โ€™, โ€˜N/Aโ€™, โ€˜NAโ€™, โ€˜NULLโ€™, โ€˜NaNโ€™, โ€˜n/aโ€™, โ€˜nanโ€™, โ€˜nullโ€™.\r\n```\r\n(see \"null\" in the last position in the above list).\r\n\r\nIn order to prevent `pandas` from performing that automatic conversion from the string \"null\" to a NaN value, you should pass the `pandas` parameter `keep_default_na=False`:\r\n```python\r\nIn [2]: dataset = load_dataset('csv', data_files={'train': 'null-test.csv'}, keep_default_na=False)\r\nIn [3]: dataset[\"train\"][0][\"transcript\"]\r\nOut[3]: 'null'\r\n```", "Thanks for the quick answer." ]
1,654,784,760,000
1,654,797,337,000
1,654,792,142,000
NONE
null
## Issue I am training a Luxembourgish speech-recognition model in Colab with a custom dataset, including a dictionary of Luxembourgish words, for example the spoken numbers 0 to 9. When preparing the dataset with the script `ds_train1 = mydataset.map(prepare_dataset)` the following error was raised: ``` ValueError Traceback (most recent call last) <ipython-input-69-1e8f2b37f5bc> in <module>() ----> 1 ds_train = mydataset_train.map(prepare_dataset) 11 frames /usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in __call__(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs) 2450 if not _is_valid_text_input(text): 2451 raise ValueError( -> 2452 "text input must of type str (single example), List[str] (batch or single pretokenized example) " 2453 "or List[List[str]] (batch of pretokenized examples)." 2454 ) ValueError: text input must of type str (single example), List[str] (batch or single pretokenized example) or List[List[str]] (batch of pretokenized examples). ``` Debugging this problem was not easy; all transcriptions in the dataset are correct strings. Finally I discovered that the transcription string 'null' is interpreted as [None] by the `load_dataset()` script. By deleting this row in the dataset the training worked fine. ## Expected result: transcription 'null' interpreted as 'str' instead of 'None'. ## Reproduction Here is the code to reproduce the error with a one-row dataset. ``` with open("null-test.csv") as f: reader = csv.reader(f) for row in reader: print(row) ``` ['wav_filename', 'wav_filesize', 'transcript'] ['wavs/female/NULL1.wav', '17530', 'null'] ``` dataset = load_dataset('csv', data_files={'train': 'null-test.csv'}) ``` Using custom data configuration default-81ac0c0e27af3514 Downloading and preparing dataset csv/default to /root/.cache/huggingface/datasets/csv/default-81ac0c0e27af3514/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519... Downloading data files: 100% 1/1 [00:00<00:00, 29.55it/s] Extracting data files: 100% 1/1 [00:00<00:00, 23.66it/s] Dataset csv downloaded and prepared to /root/.cache/huggingface/datasets/csv/default-81ac0c0e27af3514/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519. Subsequent calls will reuse this data. 100% 1/1 [00:00<00:00, 25.84it/s] ``` print(dataset['train']['transcript']) ``` [None] ## Environment info ``` !pip install datasets==2.2.2 !pip install transformers==4.19.2 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4467/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4467/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4466
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4466/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4466/comments
https://api.github.com/repos/huggingface/datasets/issues/4466/events
https://github.com/huggingface/datasets/pull/4466
1,266,159,920
PR_kwDODunzps45ZLsd
4,466
Optimize contiguous shard and select
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I thought of just mentioning the benefits I got. Here's the code that @lhoestq provided:\r\n\r\n```py\r\nimport os\r\nfrom datasets import load_dataset\r\nfrom tqdm.auto import tqdm\r\n\r\nds = load_dataset(\"squad\", split=\"train\")\r\nos.makedirs(\"tmp\")\r\n\r\nnum_shards = 5\r\nfor index in tqdm(range(num_shards)):\r\n size = len(ds) // num_shards\r\n shard = Dataset(ds.data.slice(size * index, size), fingerprint=f\"{ds._fingerprint}_{index}\")\r\n shard.to_json(f\"tmp/data_{index}.jsonl\")\r\n```\r\n\r\nIt is 1.64s. Previously the code was:\r\n\r\n```py\r\nnum_shards = 5\r\nfor index in tqdm(range(num_shards)):\r\n shard = ds.shard(num_shards=num_shards, index=index, contiguous=True)\r\n shard.to_json(f\"tmp/data_{index}.jsonl\")\r\n # upload_to_gcs(f\"tmp/data_{index}.jsonl\")\r\n```\r\n\r\nIt was 2min31s. \r\n\r\nI ran it on my humble MacBook Pro:\r\n\r\n<img width=\"574\" alt=\"image\" src=\"https://user-images.githubusercontent.com/22957388/172864881-f1db489a-2305-47f2-a07f-7d3df610b1b8.png\">\r\n", "I addressed your comments @albertvillanova , let me know what you think :)" ]
1,654,782,339,000
1,655,222,670,000
1,655,222,085,000
MEMBER
null
Currently `.shard()` and `.select()` always create an indices mapping. However, if the requested data are contiguous, it's much more efficient to simply slice the Arrow table instead of building an indices mapping. In particular: - the shard/select operation will be much faster - reading speed will be much faster in the resulting dataset, since it won't have to do a lookup step in the indices mapping. Since `.shard()` is also used for `.map()` with `num_proc>1`, it will also significantly improve the reading speed of multiprocessed `.map()` operations. Here is an example of the speed-up: ```python >>> import io >>> import numpy as np >>> from datasets import Dataset >>> ds = Dataset.from_dict({"a": np.random.rand(10_000_000)}) >>> shard = ds.shard(num_shards=4, index=0, contiguous=True) # this calls `.select(range(2_500_000))` >>> buf = io.BytesIO() >>> %time shard.to_json(buf) Creating json from Arrow format: 100%|██████████████████| 100/100 [00:00<00:00, 376.17ba/s] CPU times: user 258 ms, sys: 9.06 ms, total: 267 ms Wall time: 266 ms ``` while previously it was: ```python Creating json from Arrow format: 100%|███████████████████| 100/100 [00:03<00:00, 29.41ba/s] CPU times: user 3.33 s, sys: 69.1 ms, total: 3.39 s Wall time: 3.4 s ``` In this simple case the speed-up is x10, but @sayakpaul experienced a x100 speed-up on their data when exporting to JSON. ## Implementation details I mostly improved `.select()`: it now checks if the input corresponds to a contiguous chunk of data and then slices the main Arrow table (or the indices mapping table if it exists). To check if the input indices are contiguous it considers two possibilities: - if the indices are of type `range`, it checks that start >= 0 and step = 1 - otherwise, in the general case, it iterates over the indices. If all the indices are contiguous then we're good, otherwise we have to build an indices mapping. Having to iterate over the indices doesn't cause performance issues IMO because: - either they are contiguous, and in this case the cost of iterating over the indices is much less than the cost of creating an indices mapping - or they are not contiguous, and then iterating generally stops quickly when it encounters the first index that is not contiguous.
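As an illustration of the contiguity check described above (a sketch of the idea, not the code from this PR):

```python
def is_contiguous(indices) -> bool:
    """Return True if `indices` selects a contiguous, ascending block of rows."""
    if isinstance(indices, range):
        # A range is contiguous when it starts at a valid row and uses step 1.
        return indices.start >= 0 and indices.step == 1
    indices = list(indices)
    # General case: every index must directly follow its predecessor;
    # `all` stops at the first pair that breaks contiguity.
    return all(j == i + 1 for i, j in zip(indices, indices[1:]))

# When the check passes, `.select()` can slice the Arrow table directly,
# e.g. table.slice(offset=indices[0], length=len(indices)),
# instead of materializing an indices mapping.
```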
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4466/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4466/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4466", "html_url": "https://github.com/huggingface/datasets/pull/4466", "diff_url": "https://github.com/huggingface/datasets/pull/4466.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4466.patch", "merged_at": 1655222085000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4465
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4465/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4465/comments
https://api.github.com/repos/huggingface/datasets/issues/4465/events
https://github.com/huggingface/datasets/pull/4465
1,265,754,479
PR_kwDODunzps45X0XY
4,465
Fix bigbench config names
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,761,979,000
1,654,785,516,000
1,654,784,959,000
MEMBER
null
Fix https://github.com/huggingface/datasets/issues/4462 in the case of bigbench
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4465/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4465/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4465", "html_url": "https://github.com/huggingface/datasets/pull/4465", "diff_url": "https://github.com/huggingface/datasets/pull/4465.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4465.patch", "merged_at": 1654784958000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4464
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4464/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4464/comments
https://api.github.com/repos/huggingface/datasets/issues/4464/events
https://github.com/huggingface/datasets/pull/4464
1,265,682,931
PR_kwDODunzps45XlWW
4,464
Extend support for streaming datasets that use xml.dom.minidom.parse
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,757,905,000
1,654,764,204,000
1,654,763,656,000
MEMBER
null
This PR extends support in streaming mode to datasets that use `xml.dom.minidom.parse`, by patching that function. It adds support for streaming datasets like "Yaxin/SemEval2015". Fix #4453.
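The patch itself lives in the `datasets` streaming utilities; as a rough sketch of the underlying idea (not the actual patch), `xml.dom.minidom.parse` already accepts file-like objects, so remote files can be supported by opening the URL first and passing the handle through:

```python
import urllib.request
import xml.dom.minidom


def parse_local_or_remote(path_or_url):
    # minidom.parse accepts a file-like object, so a remote XML file can be
    # parsed by opening the URL and handing over the response object.
    if path_or_url.startswith(("http://", "https://")):
        with urllib.request.urlopen(path_or_url) as f:
            return xml.dom.minidom.parse(f)
    return xml.dom.minidom.parse(path_or_url)
```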
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4464/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4464/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4464", "html_url": "https://github.com/huggingface/datasets/pull/4464", "diff_url": "https://github.com/huggingface/datasets/pull/4464.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4464.patch", "merged_at": 1654763655000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4463
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4463/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4463/comments
https://api.github.com/repos/huggingface/datasets/issues/4463/events
https://github.com/huggingface/datasets/pull/4463
1,265,093,211
PR_kwDODunzps45Vnzu
4,463
Use config_id to check split sizes instead of config name
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "closing in favor of https://github.com/huggingface/datasets/pull/4465" ]
1,654,710,324,000
1,654,762,543,000
1,654,761,997,000
MEMBER
null
Fix https://github.com/huggingface/datasets/issues/4462
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4463/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4463/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4463", "html_url": "https://github.com/huggingface/datasets/pull/4463", "diff_url": "https://github.com/huggingface/datasets/pull/4463.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4463.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4462
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4462/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4462/comments
https://api.github.com/repos/huggingface/datasets/issues/4462/events
https://github.com/huggingface/datasets/issues/4462
1,265,079,347
I_kwDODunzps5LZ5Qz
4,462
BigBench: NonMatchingSplitsSizesError when passing a dataset configuration parameter
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Why not adding `max_examples` as part of the config name?", "Yup it can also work, and maybe it's simpler this way. Opening a PR to fix bigbench instead of https://github.com/huggingface/datasets/pull/4463", "Hi @lhoestq,\r\n\r\nThank you for taking a look at this issue, and proposing a solution. \r\nUnfortunately, after trying the fix in #4465 I still see the same issue.\r\n\r\nI think there is some subtlety where the config name gets overwritten somewhere when `BUILDER_CONFIGS`[(link)](https://github.com/huggingface/datasets/blob/master/datasets/bigbench/bigbench.py#L126) is defined. \r\n\r\nIf I print out the `self.config.name` in the current version (with the fix in #4465), I see just the task name, but if I comment out `BUILDER_CONFIGS`, the `num_shots` and `max_examples` gets appended as was meant by #4465.\r\n\r\nI haven't managed to track down where this happens, but I thought you might know? \r\n\r\n(Another comment on your fix: the `name` variable is used to fetch the task from the bigbench API, so modifying it causes an error if it's actually called. This can easily be fixed by having `config_name` variable in addition to the `task_name`)\r\n\r\n\r\n" ]
1,654,709,484,000
1,657,006,795,000
null
MEMBER
null
As noticed in https://github.com/huggingface/datasets/pull/4125, when a dataset config class has a parameter that reduces the number of examples (e.g. named `max_examples`), then loading the dataset and passing `max_examples` raises `NonMatchingSplitsSizesError`. This is because it checks the expected number of examples against the config with the same name, without taking into account the `max_examples` parameter. This can be fixed by checking the expected number of examples using the **config id** instead of the name. Indeed, the config id corresponds to the config name + an optional suffix that depends on the config parameters.
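For illustration only (the real id is computed inside `datasets`), a config id of this kind can be thought of as the config name plus a short hash of the non-default parameters:

```python
import hashlib
import json


def build_config_id(config_name: str, custom_params: dict) -> str:
    # Without custom parameters the id is just the config name; otherwise a
    # short digest of the parameters is appended, so a config loaded with
    # e.g. max_examples=100 is not checked against the full config's sizes.
    if not custom_params:
        return config_name
    digest = hashlib.sha256(
        json.dumps(custom_params, sort_keys=True).encode("utf-8")
    ).hexdigest()[:16]
    return f"{config_name}-{digest}"
```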
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4462/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4462/timeline
null
reopened
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4461
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4461/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4461/comments
https://api.github.com/repos/huggingface/datasets/issues/4461/events
https://github.com/huggingface/datasets/issues/4461
1,264,800,451
I_kwDODunzps5LY1LD
4,461
AttributeError: module 'datasets' has no attribute 'load_dataset'
{ "login": "AlexNLP", "id": 59248970, "node_id": "MDQ6VXNlcjU5MjQ4OTcw", "avatar_url": "https://avatars.githubusercontent.com/u/59248970?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AlexNLP", "html_url": "https://github.com/AlexNLP", "followers_url": "https://api.github.com/users/AlexNLP/followers", "following_url": "https://api.github.com/users/AlexNLP/following{/other_user}", "gists_url": "https://api.github.com/users/AlexNLP/gists{/gist_id}", "starred_url": "https://api.github.com/users/AlexNLP/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AlexNLP/subscriptions", "organizations_url": "https://api.github.com/users/AlexNLP/orgs", "repos_url": "https://api.github.com/users/AlexNLP/repos", "events_url": "https://api.github.com/users/AlexNLP/events{/privacy}", "received_events_url": "https://api.github.com/users/AlexNLP/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
1,654,696,760,000
1,654,699,260,000
1,654,699,260,000
NONE
null
## Describe the bug I have pip-installed datasets, but this package doesn't have these attributes: load_dataset, load_metric. ## Environment info - `datasets` version: 1.9.0 - Platform: Linux-5.13.0-44-generic-x86_64-with-debian-bullseye-sid - Python version: 3.6.13 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4461/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4461/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4460
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4460/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4460/comments
https://api.github.com/repos/huggingface/datasets/issues/4460/events
https://github.com/huggingface/datasets/pull/4460
1,264,644,205
PR_kwDODunzps45UHIs
4,460
Drop Python 3.6 support
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I've disabled the `test_dummy_dataset_serialize_s3` tests in the Linux CI to avoid the failures (these tests only fail on Windows in 3.6). These failures are unrelated to this PR's changes, and I would like to address this in a new PR.", "[This comment](https://github.com/pytorch/audio/issues/2363#issuecomment-1179089175) explains the issue with MP3 decoding in `torchaudio` in the latest release (supports Python 3.7+). I fixed CI by pinning `torchaudio` to `<0.12.0`. Another way to fix this issue is by installing `ffmpeg` with conda or using the unofficial GH action. But I don't think it's worth making CI more complex, considering we can wait for the soundfile release, which should bring MP3 decoding, and drop the `torchaudio` dependency then.", "Yay for dropping Python 3.6!", "I think we can merge in this state. Also, if an env has Python version < 3.7 installed, we raise a warning, so I don't think we even need to create (and pin) an issue to notify the contributors of this change." ]
1,654,690,218,000
1,658,862,999,000
1,658,862,261,000
CONTRIBUTOR
null
Remove the fallback imports/checks in the code needed for Python 3.6 and update the requirements/CI files. Also, use Python types for the NumPy dtype wherever possible to avoid deprecation warnings in newer NumPy versions.
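As a small, assumed example of the dtype change (the kind of replacement meant here, not an excerpt from the diff):

```python
import numpy as np

# Deprecated since NumPy 1.20: aliases such as np.float / np.int / np.bool
# emit DeprecationWarning and were later removed.
# arr = np.zeros(3, dtype=np.float)

# Preferred: use the builtin Python type (or an explicit np.float64).
arr = np.zeros(3, dtype=float)
```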
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4460/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4460/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4460", "html_url": "https://github.com/huggingface/datasets/pull/4460", "diff_url": "https://github.com/huggingface/datasets/pull/4460.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4460.patch", "merged_at": 1658862261000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4459
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4459/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4459/comments
https://api.github.com/repos/huggingface/datasets/issues/4459/events
https://github.com/huggingface/datasets/pull/4459
1,264,636,481
PR_kwDODunzps45UFc8
4,459
Add and fix language tags for udhr dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,689,822,000
1,654,691,784,000
1,654,691,233,000
MEMBER
null
Related to #4362.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4459/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4459/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4459", "html_url": "https://github.com/huggingface/datasets/pull/4459", "diff_url": "https://github.com/huggingface/datasets/pull/4459.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4459.patch", "merged_at": 1654691233000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4457
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4457/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4457/comments
https://api.github.com/repos/huggingface/datasets/issues/4457/events
https://github.com/huggingface/datasets/pull/4457
1,263,531,911
PR_kwDODunzps45QZCU
4,457
First draft of the docs for TF + Datasets
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Some links are still missing I think :)", "This is probably quite close to being ready, so cc some TF people @gante @amyeroberts @merveenoyan just so they see it! No need for a full review, but if you have any comments or suggestions feel free.", "Thanks ! We plan to make a new release later today for `to_tf_dataset` FYI, so I think we can merge it soon and include this documentation in the new release" ]
1,654,618,008,000
1,655,222,921,000
1,655,222,348,000
MEMBER
null
I might cc a few of the other TF people to take a look when this is closer to being finished, but it's still a draft for now.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4457/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4457/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4457", "html_url": "https://github.com/huggingface/datasets/pull/4457", "diff_url": "https://github.com/huggingface/datasets/pull/4457.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4457.patch", "merged_at": 1655222348000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4456
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4456/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4456/comments
https://api.github.com/repos/huggingface/datasets/issues/4456/events
https://github.com/huggingface/datasets/issues/4456
1,263,241,449
I_kwDODunzps5LS4jp
4,456
Workflow for Tabular data
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
open
false
null
[]
null
[ "I use below to load a dataset:\r\n```\r\ndataset = datasets.load_dataset(\"scikit-learn/auto-mpg\")\r\ndf = pd.DataFrame(dataset[\"train\"])\r\n```\r\nTBH as said, tabular folk split their own dataset, they sometimes have two splits, sometimes three. Maybe somehow avoiding it for tabular datasets might be good for later. (it's just UX improvement) " ]
1,654,606,102,000
1,656,669,467,000
null
MEMBER
null
Tabular data are treated very differently from data for NLP, audio, vision, etc., and therefore the workflow for tabular data in `datasets` is not ideal. For example, for tabular data it is common to use pandas/spark/dask to process the data, and then load the data into X and y (X is an array of features and y an array of labels), then train_test_split and finally feed the data to a machine learning model. In `datasets` the workflow is different: we use load_dataset, then map, then train_test_split (if we only have a train split) and we end up with columnar dataset splits, not formatted as X and y. Right now, it is already possible to convert a dataset from and to pandas, but there are still many things that could improve the workflow for tabular data: - be able to load the data into X and y - be able to load a dataset from the output of spark or dask (as far as I know it's usually csv or parquet files on S3/GCS/HDFS etc.) - support "unsplit" datasets explicitly, instead of putting everything in "train" by default cc @adrinjalali @merveenoyan feel free to complete/correct this :) Feel free to also share ideas of APIs that would be super intuitive in your opinion !
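For context, a typical tabular workflow today goes through pandas before reaching X and y; a minimal sketch (the CSV file name and the "label" column are made up for the example):

```python
from datasets import load_dataset
from sklearn.model_selection import train_test_split

# Load a tabular CSV file and move to pandas for feature/label handling.
ds = load_dataset("csv", data_files="my_table.csv", split="train")
df = ds.to_pandas()

# Split into features X and labels y, then into train/test sets.
X = df.drop(columns=["label"])
y = df["label"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
```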
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4456/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/4456/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4455
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4455/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4455/comments
https://api.github.com/repos/huggingface/datasets/issues/4455/events
https://github.com/huggingface/datasets/pull/4455
1,263,089,067
PR_kwDODunzps45O5F9
4,455
Update data URLs in fever dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,598,454,000
1,654,673,094,000
1,654,672,577,000
MEMBER
null
As stated on their website, the data owners updated their URLs on 28/04/2022. This PR updates the data URLs accordingly. Fix #4452.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4455/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4455/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4455", "html_url": "https://github.com/huggingface/datasets/pull/4455", "diff_url": "https://github.com/huggingface/datasets/pull/4455.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4455.patch", "merged_at": 1654672576000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4454
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4454/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4454/comments
https://api.github.com/repos/huggingface/datasets/issues/4454/events
https://github.com/huggingface/datasets/issues/4454
1,262,674,973
I_kwDODunzps5LQuQd
4,454
Dataset Viewer issue for Yaxin/SemEval2015
{ "login": "WithYouTo", "id": 18160852, "node_id": "MDQ6VXNlcjE4MTYwODUy", "avatar_url": "https://avatars.githubusercontent.com/u/18160852?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WithYouTo", "html_url": "https://github.com/WithYouTo", "followers_url": "https://api.github.com/users/WithYouTo/followers", "following_url": "https://api.github.com/users/WithYouTo/following{/other_user}", "gists_url": "https://api.github.com/users/WithYouTo/gists{/gist_id}", "starred_url": "https://api.github.com/users/WithYouTo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WithYouTo/subscriptions", "organizations_url": "https://api.github.com/users/WithYouTo/orgs", "repos_url": "https://api.github.com/users/WithYouTo/repos", "events_url": "https://api.github.com/users/WithYouTo/events{/privacy}", "received_events_url": "https://api.github.com/users/WithYouTo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" }, { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
null
[ "Closing since it's a duplicate of https://github.com/huggingface/datasets/issues/4453" ]
1,654,572,706,000
1,654,602,791,000
1,654,602,791,000
NONE
null
### Link _No response_ ### Description The link could not be visited. ### Owner _No response_
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4454/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4454/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4453
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4453/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4453/comments
https://api.github.com/repos/huggingface/datasets/issues/4453/events
https://github.com/huggingface/datasets/issues/4453
1,262,674,105
I_kwDODunzps5LQuC5
4,453
Dataset Viewer issue for Yaxin/SemEval2015
{ "login": "WithYouTo", "id": 18160852, "node_id": "MDQ6VXNlcjE4MTYwODUy", "avatar_url": "https://avatars.githubusercontent.com/u/18160852?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WithYouTo", "html_url": "https://github.com/WithYouTo", "followers_url": "https://api.github.com/users/WithYouTo/followers", "following_url": "https://api.github.com/users/WithYouTo/following{/other_user}", "gists_url": "https://api.github.com/users/WithYouTo/gists{/gist_id}", "starred_url": "https://api.github.com/users/WithYouTo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WithYouTo/subscriptions", "organizations_url": "https://api.github.com/users/WithYouTo/orgs", "repos_url": "https://api.github.com/users/WithYouTo/repos", "events_url": "https://api.github.com/users/WithYouTo/events{/privacy}", "received_events_url": "https://api.github.com/users/WithYouTo/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "I understand that the issue is that a remote file (URL) is being loaded as a local file. Right @albertvillanova @lhoestq?\r\n\r\n```\r\nMessage: [Errno 2] No such file or directory: 'https://raw.githubusercontent.com/YaxinCui/ABSADataset/main/SemEval2015Task12Corrected/train/restaurants_train.xml'\r\n```", "`xml.dom.minidom.parse` is not supported in streaming mode. I opened a PR here to fix it:\r\nhttps://huggingface.co/datasets/Yaxin/SemEval2015/discussions/1\r\n\r\nPlease review the PR @WithYouTo and let me know if it works !", "Additionally, I'm also patching our library, so that we support streaming datasets that use `xml.dom.minidom.parse`." ]
1,654,572,608,000
1,654,763,656,000
1,654,763,656,000
NONE
null
### Link _No response_ ### Description _No response_ ### Owner _No response_
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4453/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4453/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4452
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4452/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4452/comments
https://api.github.com/repos/huggingface/datasets/issues/4452/events
https://github.com/huggingface/datasets/issues/4452
1,262,529,654
I_kwDODunzps5LQKx2
4,452
Trying to load FEVER dataset results in NonMatchingChecksumError
{ "login": "santhnm2", "id": 5347982, "node_id": "MDQ6VXNlcjUzNDc5ODI=", "avatar_url": "https://avatars.githubusercontent.com/u/5347982?v=4", "gravatar_id": "", "url": "https://api.github.com/users/santhnm2", "html_url": "https://github.com/santhnm2", "followers_url": "https://api.github.com/users/santhnm2/followers", "following_url": "https://api.github.com/users/santhnm2/following{/other_user}", "gists_url": "https://api.github.com/users/santhnm2/gists{/gist_id}", "starred_url": "https://api.github.com/users/santhnm2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/santhnm2/subscriptions", "organizations_url": "https://api.github.com/users/santhnm2/orgs", "repos_url": "https://api.github.com/users/santhnm2/repos", "events_url": "https://api.github.com/users/santhnm2/events{/privacy}", "received_events_url": "https://api.github.com/users/santhnm2/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting @santhnm2. We are fixing it.\r\n\r\nData owners updated their URLs recently. We have to align with them, otherwise you do not download anything (that is why ignore_verifications does not work)." ]
1,654,557,195,000
1,654,672,576,000
1,654,672,576,000
NONE
null
## Describe the bug Trying to load the `fever` dataset fails with `datasets.utils.info_utils.NonMatchingChecksumError`. I tried with `download_mode="force_redownload"` but that did not fix the error. I also tried with `ignore_verification=True` but then that raised a `json.decoder.JSONDecodeError`. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('fever', 'v1.0') # Fails with NonMatchingChecksumError dataset = load_dataset('fever', 'v1.0', download_mode="force_redownload") # Fails with NonMatchingChecksumError dataset = load_dataset('fever', 'v1.0', ignore_verification=True)` # Fails with JSONDecodeError ``` ## Expected results I expect this call to return with no error raised. ## Actual results With `ignore_verification=False`: ``` *** datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://s3-eu-west-1.amazonaws.com/fever.public/train.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/shared_task_dev.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/shared_task_dev_public.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/shared_task_test.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/paper_dev.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/paper_test.jsonl'] ``` With `ignore_verification=True`: ``` *** json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.3.dev0 - Platform: Linux-4.15.0-50-generic-x86_64-with-glibc2.10 - Python version: 3.8.13 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4452/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4452/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4451
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4451/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4451/comments
https://api.github.com/repos/huggingface/datasets/issues/4451/events
https://github.com/huggingface/datasets/pull/4451
1,262,103,323
PR_kwDODunzps45LkGc
4,451
Use newer version of multi-news with fixes
{ "login": "JohnGiorgi", "id": 8917831, "node_id": "MDQ6VXNlcjg5MTc4MzE=", "avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JohnGiorgi", "html_url": "https://github.com/JohnGiorgi", "followers_url": "https://api.github.com/users/JohnGiorgi/followers", "following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}", "gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}", "starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions", "organizations_url": "https://api.github.com/users/JohnGiorgi/orgs", "repos_url": "https://api.github.com/users/JohnGiorgi/repos", "events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}", "received_events_url": "https://api.github.com/users/JohnGiorgi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Awesome thanks @mariosasko!" ]
1,654,534,628,000
1,654,623,601,000
1,654,622,084,000
CONTRIBUTOR
null
Closes #4430.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4451/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4451/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4451", "html_url": "https://github.com/huggingface/datasets/pull/4451", "diff_url": "https://github.com/huggingface/datasets/pull/4451.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4451.patch", "merged_at": 1654622084000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4450
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4450/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4450/comments
https://api.github.com/repos/huggingface/datasets/issues/4450/events
https://github.com/huggingface/datasets/pull/4450
1,261,878,324
PR_kwDODunzps45Kzwh
4,450
Update README.md of fquad
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,523,561,000
1,654,527,109,000
1,654,526,583,000
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4450/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4450/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4450", "html_url": "https://github.com/huggingface/datasets/pull/4450", "diff_url": "https://github.com/huggingface/datasets/pull/4450.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4450.patch", "merged_at": 1654526583000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4449
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4449/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4449/comments
https://api.github.com/repos/huggingface/datasets/issues/4449/events
https://github.com/huggingface/datasets/issues/4449
1,261,262,326
I_kwDODunzps5LLVX2
4,449
Rj
{ "login": "Aeckard45", "id": 87345839, "node_id": "MDQ6VXNlcjg3MzQ1ODM5", "avatar_url": "https://avatars.githubusercontent.com/u/87345839?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Aeckard45", "html_url": "https://github.com/Aeckard45", "followers_url": "https://api.github.com/users/Aeckard45/followers", "following_url": "https://api.github.com/users/Aeckard45/following{/other_user}", "gists_url": "https://api.github.com/users/Aeckard45/gists{/gist_id}", "starred_url": "https://api.github.com/users/Aeckard45/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aeckard45/subscriptions", "organizations_url": "https://api.github.com/users/Aeckard45/orgs", "repos_url": "https://api.github.com/users/Aeckard45/repos", "events_url": "https://api.github.com/users/Aeckard45/events{/privacy}", "received_events_url": "https://api.github.com/users/Aeckard45/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,654,482,272,000
1,654,530,290,000
1,654,530,290,000
NONE
null
import android.content.DialogInterface; import android.database.Cursor; import android.os.Bundle; import android.view.View; import android.widget.ArrayAdapter; import android.widget.Button; import android.widget.EditText; import android.widget.Toast; import androidx.appcompat.app.AlertDialog; import androidx.appcompat.app.AppCompatActivity; public class MainActivity extends AppCompatActivity { private EditText editTextID; private EditText editTextName; private EditText editTextNum; private String name; private int number; private String ID; private dbHelper db; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); db = new dbHelper(this); editTextID = findViewById(R.id.editText1); editTextName = findViewById(R.id.editText2); editTextNum = findViewById(R.id.editText3); Button buttonSave = findViewById(R.id.button); Button buttonRead = findViewById(R.id.button2); Button buttonUpdate = findViewById(R.id.button3); Button buttonDelete = findViewById(R.id.button4); Button buttonSearch = findViewById(R.id.button5); Button buttonDeleteAll = findViewById(R.id.button6); buttonSave.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { name = editTextName.getText().toString(); String num = editTextNum.getText().toString(); if (name.isEmpty() || num.isEmpty()) { Toast.makeText(MainActivity.this, "Cannot Submit Empty Fields", Toast.LENGTH_SHORT).show(); } else { number = Integer.parseInt(num); try { // Insert Data db.insertData(name, number); // Clear the fields editTextID.getText().clear(); editTextName.getText().clear(); editTextNum.getText().clear(); } catch (Exception e) { e.printStackTrace(); } } } }); buttonRead.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { final ArrayAdapter<String> adapter = new ArrayAdapter<>(MainActivity.this, android.R.layout.simple_list_item_1); String name; String num; String id; try { Cursor cursor = db.readData(); if (cursor != null && cursor.getCount() > 0) { while (cursor.moveToNext()) { id = cursor.getString(0); // get data in column index 0 name = cursor.getString(1); // get data in column index 1 num = cursor.getString(2); // get data in column index 2 // Add SQLite data to listView adapter.add("ID :- " + id + "\n" + "Name :- " + name + "\n" + "Number :- " + num + "\n\n"); } } else { adapter.add("No Data"); } cursor.close(); } catch (Exception e) { e.printStackTrace(); } // show the saved data in alertDialog AlertDialog.Builder builder = new AlertDialog.Builder(MainActivity.this); builder.setTitle("SQLite saved data"); builder.setIcon(R.mipmap.app_icon_foreground); builder.setAdapter(adapter, new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { } }); builder.setPositiveButton("OK", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { dialog.cancel(); } }); AlertDialog dialog = builder.create(); dialog.show(); } }); buttonUpdate.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { name = editTextName.getText().toString(); String num = editTextNum.getText().toString(); ID = editTextID.getText().toString(); if (name.isEmpty() || num.isEmpty() || ID.isEmpty()) { Toast.makeText(MainActivity.this, "Cannot Submit Empty Fields", Toast.LENGTH_SHORT).show(); } else { number = Integer.parseInt(num); try { // Update Data db.updateData(ID, name, number); // Clear the fields editTextID.getText().clear(); editTextName.getText().clear(); editTextNum.getText().clear(); } catch (Exception e) { e.printStackTrace(); } } } }); buttonDelete.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { ID = editTextID.getText().toString(); if (ID.isEmpty()) { Toast.makeText(MainActivity.this, "Please enter the ID", Toast.LENGTH_SHORT).show(); } else { try { // Delete Data db.deleteData(ID); // Clear the fields editTextID.getText().clear(); editTextName.getText().clear(); editTextNum.getText().clear(); } catch (Exception e) { e.printStackTrace(); } } } }); buttonDeleteAll.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { // Delete all data // You can simply delete all the data by calling this method --> db.deleteAllData(); // You can try this also AlertDialog.Builder builder = new AlertDialog.Builder(MainActivity.this); builder.setIcon(R.mipmap.app_icon_foreground); builder.setTitle("Delete All Data"); builder.setCancelable(false); builder.setMessage("Do you really need to delete your all data ?"); builder.setPositiveButton("Yes", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { // User confirmed , now you can delete the data db.deleteAllData(); // Clear the fields editTextID.getText().clear(); editTextName.getText().clear(); editTextNum.getText().clear(); } }); builder.setNegativeButton("No", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { // user not confirmed dialog.cancel(); } }); AlertDialog dialog = builder.create(); dialog.show(); } }); buttonSearch.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { ID = editTextID.getText().toString(); if (ID.isEmpty()) { Toast.makeText(MainActivity.this, "Please enter the ID", Toast.LENGTH_SHORT).show(); } else { try { // Search data Cursor cursor = db.searchData(ID); if (cursor.moveToFirst()) { editTextName.setText(cursor.getString(1)); editTextNum.setText(cursor.getString(2)); Toast.makeText(MainActivity.this, "Data successfully searched", Toast.LENGTH_SHORT).show(); } else { Toast.makeText(MainActivity.this, "ID not found", Toast.LENGTH_SHORT).show(); editTextNum.setText("ID Not found"); editTextName.setText("ID not found"); } cursor.close(); } catch (Exception e) { e.printStackTrace(); } } } }); } }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4449/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4449/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4448
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4448/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4448/comments
https://api.github.com/repos/huggingface/datasets/issues/4448/events
https://github.com/huggingface/datasets/issues/4448
1,260,966,129
I_kwDODunzps5LKNDx
4,448
New Preprocessing Feature - Deduplication [Request]
{ "login": "yuvalkirstain", "id": 57996478, "node_id": "MDQ6VXNlcjU3OTk2NDc4", "avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuvalkirstain", "html_url": "https://github.com/yuvalkirstain", "followers_url": "https://api.github.com/users/yuvalkirstain/followers", "following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}", "gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions", "organizations_url": "https://api.github.com/users/yuvalkirstain/orgs", "repos_url": "https://api.github.com/users/yuvalkirstain/repos", "events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}", "received_events_url": "https://api.github.com/users/yuvalkirstain/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" }, { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi! The [datasets_sql](https://github.com/mariosasko/datasets_sql) package lets you easily find distinct rows in a dataset (an example with `SELECT DISTINCT` is in the readme). Deduplication is (still) not part of the official API because it's hard to implement for datasets bigger than RAM while only using the native PyArrow ops.\r\n\r\n(Btw, this is a duplicate of https://github.com/huggingface/datasets/issues/2514)" ]
1,654,407,176,000
1,654,442,067,000
null
NONE
null
**Is your feature request related to a problem? Please describe.** Many large datasets are full of duplications and it has been shown that deduplicating datasets can lead to better performance while training, and more truthful evaluation at test-time. A feature that allows one to easily deduplicate a dataset can be cool! **Describe the solution you'd like** We can define a function and keep only the first/last data-point that yields the value according to this function. **Describe alternatives you've considered** The clear alternative is to repeat a clear boilerplate every time someone want to deduplicate a dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4448/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4448/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4447
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4447/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4447/comments
https://api.github.com/repos/huggingface/datasets/issues/4447/events
https://github.com/huggingface/datasets/pull/4447
1,260,041,805
PR_kwDODunzps45E4A-
4,447
Minor fixes/improvements in `scene_parse_150` card
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,269,754,000
1,654,530,625,000
1,654,530,097,000
CONTRIBUTOR
null
Add `paperswithcode_id` and fix some links in the `scene_parse_150` card.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4447/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4447/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4447", "html_url": "https://github.com/huggingface/datasets/pull/4447", "diff_url": "https://github.com/huggingface/datasets/pull/4447.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4447.patch", "merged_at": 1654530097000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4446
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4446/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4446/comments
https://api.github.com/repos/huggingface/datasets/issues/4446/events
https://github.com/huggingface/datasets/pull/4446
1,260,028,995
PR_kwDODunzps45E1Qb
4,446
Add missing kwargs to docstrings
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,269,027,000
1,654,272,609,000
1,654,272,089,000
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4446/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4446/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4446", "html_url": "https://github.com/huggingface/datasets/pull/4446", "diff_url": "https://github.com/huggingface/datasets/pull/4446.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4446.patch", "merged_at": 1654272089000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4445
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4445/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4445/comments
https://api.github.com/repos/huggingface/datasets/issues/4445/events
https://github.com/huggingface/datasets/pull/4445
1,259,947,568
PR_kwDODunzps45EjtA
4,445
Fix missing args in docstring of load_dataset_builder
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,264,550,000
1,654,266,932,000
1,654,266,429,000
MEMBER
null
Currently, the docstring of `load_dataset_builder` only contains the first parameter `path` (no other): - https://huggingface.co/docs/datasets/v2.2.1/en/package_reference/loading_methods#datasets.load_dataset_builder.path
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4445/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4445/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4445", "html_url": "https://github.com/huggingface/datasets/pull/4445", "diff_url": "https://github.com/huggingface/datasets/pull/4445.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4445.patch", "merged_at": 1654266429000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4444
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4444/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4444/comments
https://api.github.com/repos/huggingface/datasets/issues/4444/events
https://github.com/huggingface/datasets/pull/4444
1,259,738,209
PR_kwDODunzps45D2XX
4,444
Fix kwargs in docstrings
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,252,142,000
1,654,254,088,000
1,654,253,566,000
MEMBER
null
To fix the rendering of `**kwargs` in docstrings, a parentheses must be added afterwards. See: - huggingface/doc-builder/issues/235
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4444/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4444/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4444", "html_url": "https://github.com/huggingface/datasets/pull/4444", "diff_url": "https://github.com/huggingface/datasets/pull/4444.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4444.patch", "merged_at": 1654253566000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4443
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4443/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4443/comments
https://api.github.com/repos/huggingface/datasets/issues/4443/events
https://github.com/huggingface/datasets/issues/4443
1,259,606,334
I_kwDODunzps5LFBE-
4,443
Dataset Viewer issue for openclimatefix/nimrod-uk-1km
{ "login": "ZYMXIXI", "id": 32382826, "node_id": "MDQ6VXNlcjMyMzgyODI2", "avatar_url": "https://avatars.githubusercontent.com/u/32382826?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZYMXIXI", "html_url": "https://github.com/ZYMXIXI", "followers_url": "https://api.github.com/users/ZYMXIXI/followers", "following_url": "https://api.github.com/users/ZYMXIXI/following{/other_user}", "gists_url": "https://api.github.com/users/ZYMXIXI/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZYMXIXI/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZYMXIXI/subscriptions", "organizations_url": "https://api.github.com/users/ZYMXIXI/orgs", "repos_url": "https://api.github.com/users/ZYMXIXI/repos", "events_url": "https://api.github.com/users/ZYMXIXI/events{/privacy}", "received_events_url": "https://api.github.com/users/ZYMXIXI/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "If I understand correctly, this is due to the key `split` missing in the line https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L41 of the script.\r\nMaybe @albertvillanova could confirm.", "I'm having a look.", "Indeed there are several issues in this dataset loading script.\r\n\r\nThe one pointed out by @severo: for the default configuration \"crops\": https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L244\r\n- The download manager downloads `_URL`\r\n- But `_URL` is not defined: https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L41\r\n ```python\r\n _URL = {'train': []}\r\n ```\r\n- Afterwards, for each split, a different key in `_ULR` is used, but it only contains one key: \"train\"\r\n - \"valid\" key: https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L260\r\n - \"test key: https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L269\r\n \r\nThese keys do not exist inside `_URL`, thus the error message reported in the viewer: \r\n```\r\nException: KeyError\r\nMessage: 'valid'\r\n```", "Would anyone want to submit a Hub PR (or open a Discussion for the authors to be aware) to this dataset? https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km", "Hi, I'm the main author for that dataset, so I'll work on updating it! I was working on debugging some stuff awhile ago, which is what broke it. ", "I've opened a Discussion page, so that we can ask/answer and propose fixes until the script works properly: https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/discussions/1\r\n\r\nCC: @julien-c @jacobbieker " ]
1,654,244,236,000
1,654,590,232,000
null
NONE
null
### Link _No response_ ### Description _No response_ ### Owner _No response_
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4443/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4443/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4442
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4442/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4442/comments
https://api.github.com/repos/huggingface/datasets/issues/4442/events
https://github.com/huggingface/datasets/issues/4442
1,258,589,276
I_kwDODunzps5LBIxc
4,442
Dataset Viewer issue for amazon_polarity
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks, looking at it", "Not sure what happened ๐Ÿ˜ฌ, but it's fixed" ]
1,654,197,518,000
1,654,627,837,000
1,654,627,837,000
MEMBER
null
### Link https://huggingface.co/datasets/amazon_polarity/viewer/amazon_polarity/test ### Description For some reason the train split is OK but the test split is not for this dataset: ``` Server error Status code: 400 Exception: FileNotFoundError Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/amazon_polarity/__init__.py' ``` ### Owner No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4442/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4442/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4441
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4441/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4441/comments
https://api.github.com/repos/huggingface/datasets/issues/4441/events
https://github.com/huggingface/datasets/issues/4441
1,258,568,656
I_kwDODunzps5LBDvQ
4,441
Dataset Viewer issue for aeslc
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
null
[ "Not sure what happened ๐Ÿ˜ฌ, but it's fixed" ]
1,654,196,232,000
1,654,627,855,000
1,654,627,855,000
MEMBER
null
### Link https://huggingface.co/datasets/aeslc ### Description The dataset viewer can't find `dataset_infos.json` in it's cache: ``` Server error Status code: 400 Exception: FileNotFoundError Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/aeslc/eb8e30234cf984a58ebe9f205674597ac1db2ec91e7321cd7f36864f7e3671b8/dataset_infos.json' ``` ### Owner No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4441/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4441/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4440
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4440/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4440/comments
https://api.github.com/repos/huggingface/datasets/issues/4440/events
https://github.com/huggingface/datasets/pull/4440
1,258,494,469
PR_kwDODunzps44_io_
4,440
Update docs around audio and vision
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Let me know what you think, especially if we should include some code samples for training a model in the audio/vision sections. I left this out since we already showed it in the NLP section. I want to keep the focus on using Datasets to load and process a dataset, and not so much the training part. Maybe we can add links to the Transformers docs instead?\r\n\r\nWe plan to address this with end-to-end examples (for each modality) more focused on preprocessing than the ones in the Transformers docs." ]
1,654,191,723,000
1,656,001,999,000
1,656,001,382,000
MEMBER
null
As part of the strategy to center the docs around the different modalities, this PR updates the quickstart to include audio and vision examples. This improves the developer experience by making audio and vision content more discoverable, enabling users working in these modalities to also quickly get started without digging too deeply into the docs. Other changes include: - Moved the installation guide to the Get Started section because it should be part of a user's onboarding to the library before exploring tutorials or how-to's. - Updated the native TF code at creating a `tf.data.Dataset` because it was throwing an error. The `to_tensor()` bit was redundant and removing it fixed the error (please double-check me here!). - Added some UI components to the quickstart so it's easier for users to navigate directly to the relevant section with context about what to expect. - Reverted to the code tabs for content that don't have any framework-specific text. I think this saves space compared to the code blocks. We'll still use the code blocks if the `torch` text is different from the `tf` text. Let me know what you think, especially if we should include some code samples for training a model in the audio/vision sections. I left this out since we already showed it in the NLP section. I want to keep the focus on using Datasets to load and process a dataset, and not so much the training part. Maybe we can add links to the Transformers docs instead?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4440/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4440/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4440", "html_url": "https://github.com/huggingface/datasets/pull/4440", "diff_url": "https://github.com/huggingface/datasets/pull/4440.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4440.patch", "merged_at": 1656001382000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4439
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4439/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4439/comments
https://api.github.com/repos/huggingface/datasets/issues/4439/events
https://github.com/huggingface/datasets/issues/4439
1,258,434,111
I_kwDODunzps5LAi4_
4,439
TIMIT won't load after manual download: Errors about files that don't exist
{ "login": "drscotthawley", "id": 13925685, "node_id": "MDQ6VXNlcjEzOTI1Njg1", "avatar_url": "https://avatars.githubusercontent.com/u/13925685?v=4", "gravatar_id": "", "url": "https://api.github.com/users/drscotthawley", "html_url": "https://github.com/drscotthawley", "followers_url": "https://api.github.com/users/drscotthawley/followers", "following_url": "https://api.github.com/users/drscotthawley/following{/other_user}", "gists_url": "https://api.github.com/users/drscotthawley/gists{/gist_id}", "starred_url": "https://api.github.com/users/drscotthawley/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/drscotthawley/subscriptions", "organizations_url": "https://api.github.com/users/drscotthawley/orgs", "repos_url": "https://api.github.com/users/drscotthawley/repos", "events_url": "https://api.github.com/users/drscotthawley/events{/privacy}", "received_events_url": "https://api.github.com/users/drscotthawley/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "To have some context, please see:\r\n- #4145\r\n\r\nPlease, also note that we have recently made some fixes to the script, which are in our GitHub master branch but not yet released:\r\n- #4422\r\n- #4425 \r\n- #4436", "Thanks Albert! I'll try pulling `datasets` from the git repo instead of PyPI, and/or just wait for the next release.\r\n", "I'm closing this issue then. Please, feel free to reopen it again if the problem persists." ]
1,654,187,756,000
1,654,245,857,000
1,654,245,856,000
NONE
null
## Describe the bug I get the message from HuggingFace that it must be downloaded manually. From the URL provided in the message, I got to UPenn page for manual download. (UPenn apparently want $250? for the dataset??) ...So, ok, I obtained a copy from a friend and also a smaller version from Kaggle. But in both cases the HF dataloader fails; it is looking for files that don't exist anywhere in the dataset: it is looking for files with lower-case letters like "**test*" (all the filenames in both my copies are uppercase) and certain file extensions that exclude the .DOC which is provided in TIMIT: ## Steps to reproduce the bug ```python data = load_dataset('timit_asr', 'clean')['train'] ``` ## Expected results The dataset should load with no errors. ## Actual results This error message: ``` File "/home/ubuntu/envs/data2vec/lib/python3.9/site-packages/datasets/data_files.py", line 201, in resolve_patterns_locally_or_by_urls raise FileNotFoundError(error_msg) FileNotFoundError: Unable to resolve any data file that matches '['**test*', '**eval*']' at /home/ubuntu/datasets/timit with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip'] ``` But this is a strange sort of error: why is it looking for lower-case file names when all the TIMIT dataset filenames are uppercase? Why does it exclude .DOC files when the only parts of the TIMIT data set with "TEST" in them have ".DOC" extensions? ...I wonder, how was anyone able to get this to work in the first place? The files in the dataset look like the following: ``` ³ PHONCODE.DOC ³ PROMPTS.TXT ³ SPKRINFO.TXT ³ SPKRSENT.TXT ³ TESTSET.DOC ``` ...so why are these being excluded by the dataset loader? ## Environment info - `datasets` version: 2.2.2 - Platform: Linux-5.4.0-1060-aws-x86_64-with-glibc2.27 - Python version: 3.9.9 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4439/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4439/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4438
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4438/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4438/comments
https://api.github.com/repos/huggingface/datasets/issues/4438/events
https://github.com/huggingface/datasets/pull/4438
1,258,255,394
PR_kwDODunzps44-vhC
4,438
Fix docstring of inspect_dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,179,670,000
1,654,188,055,000
1,654,187,547,000
MEMBER
null
As pointed out by @sgugger: - huggingface/doc-builder/issues/235
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4438/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4438/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4438", "html_url": "https://github.com/huggingface/datasets/pull/4438", "diff_url": "https://github.com/huggingface/datasets/pull/4438.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4438.patch", "merged_at": 1654187547000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4437
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4437/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4437/comments
https://api.github.com/repos/huggingface/datasets/issues/4437/events
https://github.com/huggingface/datasets/pull/4437
1,258,249,582
PR_kwDODunzps44-uRW
4,437
Add missing columns to `blended_skill_talk`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,179,386,000
1,654,530,596,000
1,654,530,085,000
CONTRIBUTOR
null
Adds the missing columns to `blended_skill_talk` to align the loading logic with [ParlAI](https://github.com/facebookresearch/ParlAI/blob/main/parlai/tasks/blended_skill_talk/build.py). Fix #4426
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4437/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4437/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4437", "html_url": "https://github.com/huggingface/datasets/pull/4437", "diff_url": "https://github.com/huggingface/datasets/pull/4437.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4437.patch", "merged_at": 1654530085000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4436
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4436/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4436/comments
https://api.github.com/repos/huggingface/datasets/issues/4436/events
https://github.com/huggingface/datasets/pull/4436
1,257,758,834
PR_kwDODunzps449FsU
4,436
Fix directory names for LDC data in timit_asr dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,152,304,000
1,654,162,376,000
1,654,161,867,000
MEMBER
null
Related to: - #4422
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4436/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4436/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4436", "html_url": "https://github.com/huggingface/datasets/pull/4436", "diff_url": "https://github.com/huggingface/datasets/pull/4436.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4436.patch", "merged_at": 1654161867000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4435
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4435/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4435/comments
https://api.github.com/repos/huggingface/datasets/issues/4435/events
https://github.com/huggingface/datasets/issues/4435
1,257,496,552
I_kwDODunzps5K89_o
4,435
Load a local cached dataset that has been modified
{ "login": "mihail911", "id": 2789441, "node_id": "MDQ6VXNlcjI3ODk0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/2789441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mihail911", "html_url": "https://github.com/mihail911", "followers_url": "https://api.github.com/users/mihail911/followers", "following_url": "https://api.github.com/users/mihail911/following{/other_user}", "gists_url": "https://api.github.com/users/mihail911/gists{/gist_id}", "starred_url": "https://api.github.com/users/mihail911/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mihail911/subscriptions", "organizations_url": "https://api.github.com/users/mihail911/orgs", "repos_url": "https://api.github.com/users/mihail911/repos", "events_url": "https://api.github.com/users/mihail911/events{/privacy}", "received_events_url": "https://api.github.com/users/mihail911/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi! `datasets` caches every modification/loading, so you can either rerun the pipeline up to the `map` call or use `Dataset.from_file(modified_dataset)` to load the dataset directly from the cache file.", "Awesome, hvala Mario! This works. " ]
1,654,134,709,000
1,654,214,366,000
1,654,214,358,000
NONE
null
## Describe the bug I have loaded a dataset as follows: ``` d = load_dataset("emotion", split="validation") ``` Afterwards I make some modifications to the dataset via a `map` call: ``` d.map(some_update_func, cache_file_name=modified_dataset) ``` This generates a cached version of the dataset on my local system in the same directory as the original download of the data (/path/to/cache). Running an `ls` returns: ``` modified_dataset dataset_info.json emotion-test.arrow emotion-train.arrow emotion-validation.arrow ``` as expected. However, when I try to load up the modified cached dataset via a call to ``` modified = load_dataset("emotion", split="validation", data_files="/path/to/cache/modified_dataset") ``` it simply redownloads a new version of the dataset and dumps to a new cache rather than loading up the original modified dataset: ``` Using custom data configuration validation-cdbf51685638421b Downloading and preparing dataset emotion/validation to ... ``` How am I supposed to load the original modified local cache copy of the dataset? ## Environment info - `datasets` version: 2.2.2 - Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4435/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4435/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4434
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4434/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4434/comments
https://api.github.com/repos/huggingface/datasets/issues/4434/events
https://github.com/huggingface/datasets/pull/4434
1,256,207,321
PR_kwDODunzps443mAr
4,434
Fix dummy dataset generation script for handling nested types of _URLs
{ "login": "silverriver", "id": 2529049, "node_id": "MDQ6VXNlcjI1MjkwNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4", "gravatar_id": "", "url": "https://api.github.com/users/silverriver", "html_url": "https://github.com/silverriver", "followers_url": "https://api.github.com/users/silverriver/followers", "following_url": "https://api.github.com/users/silverriver/following{/other_user}", "gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}", "starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/silverriver/subscriptions", "organizations_url": "https://api.github.com/users/silverriver/orgs", "repos_url": "https://api.github.com/users/silverriver/repos", "events_url": "https://api.github.com/users/silverriver/events{/privacy}", "received_events_url": "https://api.github.com/users/silverriver/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,654,095,195,000
1,654,603,708,000
1,654,593,849,000
CONTRIBUTOR
null
It seems that when user specify nested _URLs structures in their dataset script. An error will be raised when generating dummy dataset. I think the types of all elements in `dummy_data_dict.values()` should be checked because they may have different types. Linked to issue #4428 PS: I am not sure whether my code fix this issue in a proper way.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4434/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4434/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4434", "html_url": "https://github.com/huggingface/datasets/pull/4434", "diff_url": "https://github.com/huggingface/datasets/pull/4434.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4434.patch", "merged_at": 1654593849000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4433
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4433/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4433/comments
https://api.github.com/repos/huggingface/datasets/issues/4433/events
https://github.com/huggingface/datasets/pull/4433
1,255,830,758
PR_kwDODunzps442P5L
4,433
Fix script fetching and local path handling in `inspect_dataset` and `inspect_metric`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Added back the `[:]` and a comment to explain why this is needed. " ]
1,654,085,396,000
1,654,770,894,000
1,654,770,367,000
CONTRIBUTOR
null
Fix #4348
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4433/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4433/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4433", "html_url": "https://github.com/huggingface/datasets/pull/4433", "diff_url": "https://github.com/huggingface/datasets/pull/4433.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4433.patch", "merged_at": 1654770366000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4432
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4432/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4432/comments
https://api.github.com/repos/huggingface/datasets/issues/4432/events
https://github.com/huggingface/datasets/pull/4432
1,255,523,720
PR_kwDODunzps441JmK
4,432
Fix builder docstring
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,076,730,000
1,654,191,827,000
1,654,191,315,000
MEMBER
null
Currently, the args of `DatasetBuilder` do not appear in the docs: https://huggingface.co/docs/datasets/v2.1.0/en/package_reference/builder_classes#datasets.DatasetBuilder
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4432/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4432/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4432", "html_url": "https://github.com/huggingface/datasets/pull/4432", "diff_url": "https://github.com/huggingface/datasets/pull/4432.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4432.patch", "merged_at": 1654191315000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4431
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4431/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4431/comments
https://api.github.com/repos/huggingface/datasets/issues/4431/events
https://github.com/huggingface/datasets/pull/4431
1,254,618,948
PR_kwDODunzps44x5aG
4,431
Add personaldialog datasets
{ "login": "silverriver", "id": 2529049, "node_id": "MDQ6VXNlcjI1MjkwNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4", "gravatar_id": "", "url": "https://api.github.com/users/silverriver", "html_url": "https://github.com/silverriver", "followers_url": "https://api.github.com/users/silverriver/followers", "following_url": "https://api.github.com/users/silverriver/following{/other_user}", "gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}", "starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/silverriver/subscriptions", "organizations_url": "https://api.github.com/users/silverriver/orgs", "repos_url": "https://api.github.com/users/silverriver/repos", "events_url": "https://api.github.com/users/silverriver/events{/privacy}", "received_events_url": "https://api.github.com/users/silverriver/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "These test errors are related to issue #4428 \r\n", "_The documentation is not available anymore as the PR was closed or merged._", "I only made a trivial modification in my commit https://github.com/huggingface/datasets/pull/4431/commits/402c893d35224d7828176717233909ac5f1e7b3e\r\n\r\nI have submitted a PR #4434 for the about issue.", "> Awesome thanks for adding this dataset :)\r\n> \r\n> I just have one comment about the licensing.\r\n> \r\n> Also it seems that you already have the dataset in https://huggingface.co/datasets/silver/personal_dialog, so it's unnecessary to add it here\r\n\r\nThank you very much for your comment.\r\n\r\nSo, should I close this PR?", "Thanks for fixing the licensing section :)\r\n\r\n> So, should I close this PR?\r\n\r\nYes you can close this PR, it's better if your dataset is under your namespace at https://huggingface.co/datasets/silver/personal_dialog :)\r\n\r\nDon't forget to update the licensing section on https://huggingface.co/datasets/silver/personal_dialog as well" ]
1,654,046,440,000
1,654,951,223,000
1,654,950,676,000
CONTRIBUTOR
null
It seems that all tests have passed.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4431/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4431/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4431", "html_url": "https://github.com/huggingface/datasets/pull/4431", "diff_url": "https://github.com/huggingface/datasets/pull/4431.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4431.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4430
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4430/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4430/comments
https://api.github.com/repos/huggingface/datasets/issues/4430/events
https://github.com/huggingface/datasets/issues/4430
1,254,412,591
I_kwDODunzps5KxNEv
4,430
Add ability to load newer, cleaner version of Multi-News
{ "login": "JohnGiorgi", "id": 8917831, "node_id": "MDQ6VXNlcjg5MTc4MzE=", "avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JohnGiorgi", "html_url": "https://github.com/JohnGiorgi", "followers_url": "https://api.github.com/users/JohnGiorgi/followers", "following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}", "gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}", "starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions", "organizations_url": "https://api.github.com/users/JohnGiorgi/orgs", "repos_url": "https://api.github.com/users/JohnGiorgi/repos", "events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}", "received_events_url": "https://api.github.com/users/JohnGiorgi/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Hi! Our versioning is based on Git revisions (the `revision` param in `load_dataset`), so you can just replace the old URL with the new one and open a PR :). I can also give you some pointers if needed.", "@mariosasko Awesome thanks! I will do that. Looks like this new version of the data is not available as a zip but as three files (train/dev/test). How is this usually handled in HF Datasets, should `_URL` be a dict with keys `train`, `val`, `test` perhaps?", "Yes! Let me help you with more detailed instructions.\r\n\r\nIn the first step, we need to update the URLs. One of the possible dictionary structures is as follows:\r\n```python\r\n_URLs = {\r\n \"train\": {\"src\": \"https://drive.google.com/uc?export=download&id=1wHAWDOwOoQWSj7HYpyJ3Aeud8WhhaJ7P\", \"tgt\": \"https://drive.google.com/uc?export=download&id=1QVgswwhVTkd3VLCzajK6eVkcrSWEK6kq\"}\r\n \"val\": ...\r\n \"test\": ...\r\n}\r\n```\r\n\r\n(You can use this page to generate direct download links: https://sites.google.com/site/gdocs2direct/)\r\n\r\nThen we move to the `split_generators` method:\r\n```python\r\ndef _split_generators(self, dl_manager):\r\n \"\"\"Returns SplitGenerators.\"\"\"\r\n files = dl_manager.download(_URLs)\r\n return [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN,\r\n gen_kwargs={\"src_file\": files[\"train\"][\"src\"], \"tgt_file\": files[\"train\"][\"tgt\"]},\r\n ),\r\n ... # same for val and test\r\n ]\r\n```\r\nFinally, we adjust the signature of `_generate_examples`:\r\n```python\r\ndef _generate_examples(self, src_file, tgt_file):\r\n \"\"\"Yields examples.\"\"\"\r\n with open(src_file, encoding=\"utf-8\") as src_f, open(\r\n tgt_file, encoding=\"utf-8\"\r\n ) as tgt_f:\r\n ... # the rest is the same\r\n```\r\n\r\nAnd that's it!\r\n\r\nPS: Let me know if you need help updating the dummy data and regenerating the metadata file.", "Awesome! Thanks for the detailed help, that was straightforward with your instruction. However, I think I am being blocked by this issue: https://github.com/huggingface/datasets/issues/4428", "Feel free to open a PR, and I can fix this manually.", "Awsome, done in #4451!" ]
1,654,030,844,000
1,654,622,084,000
1,654,622,084,000
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** The [Multi-News dataloader points to the original version of the Multi-News dataset](https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/datasets/multi_news/multi_news.py#L47), but this has [known errors in it](https://github.com/Alex-Fabbri/Multi-News/issues/11). There exists a [newer version which fixes some of these issues](https://drive.google.com/open?id=1jwBzXBVv8sfnFrlzPnSUBHEEAbpIUnFq). Unfortunately, I don't think you can just replace this old URL with the new one, as this could lead to issues with reproducibility. **Describe the solution you'd like** Add a new version to the Multi-News dataloader that points to the updated dataset, which has fixes for some known issues. **Describe alternatives you've considered** Replace the current URL to the original version of the dataset with the URL to the version with fixes. **Additional context** I would be happy to make a PR for this; could someone maybe point me to another dataloader that has multiple versions so I can see how this is handled in `datasets`?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4430/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4430/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4429
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4429/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4429/comments
https://api.github.com/repos/huggingface/datasets/issues/4429/events
https://github.com/huggingface/datasets/pull/4429
1,254,184,358
PR_kwDODunzps44whxN
4,429
Update builder docstring for deprecated/added arguments
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "@mishig25 is investigating why deprecated/added do not affect the enclosed text format when used in args docstring: no special formatting appears: \r\n- https://moon-ci-docs.huggingface.co/docs/datasets/pr_4429/en/package_reference/builder_classes#datasets.DatasetBuilder", "@albertvillanova please check now ๐Ÿ‘ \r\nhttps://moon-ci-docs.huggingface.co/docs/datasets/pr_4429/en/package_reference/builder_classes#datasets.DatasetBuilder\r\n\r\n<img width=\"500\" alt=\"Screenshot 2022-06-06 at 10 20 34\" src=\"https://user-images.githubusercontent.com/11827707/172123471-fab97138-c903-4a71-ab7f-c90e5e43c58f.png\">\r\n", "Thanks @mishig25.\r\n\r\nJust one question: is it expected to have the deprecated box right edge not filling all the page width (contrary to the added box)?", "> Just one question: is it expected to have the deprecated box right edge not filling all the page width (contrary to the added box)?\r\n\r\nYes, that is expected ๐Ÿ˜Š because the depreacted box is being bounded by its parent box (the box for `name` argument in the screenshot above)" ]
1,654,018,645,000
1,654,688,418,000
1,654,687,905,000
MEMBER
null
This PR updates the builder docstring with deprecated/added directives for arguments name/config_name. Follow up of: - #4414 - huggingface/doc-builder#233 First merge: - #4432
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4429/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4429/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4429", "html_url": "https://github.com/huggingface/datasets/pull/4429", "diff_url": "https://github.com/huggingface/datasets/pull/4429.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4429.patch", "merged_at": 1654687905000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4428
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4428/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4428/comments
https://api.github.com/repos/huggingface/datasets/issues/4428/events
https://github.com/huggingface/datasets/issues/4428
1,254,092,818
I_kwDODunzps5Kv_AS
4,428
Errors when building dummy data if you use nested _URLS
{ "login": "silverriver", "id": 2529049, "node_id": "MDQ6VXNlcjI1MjkwNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4", "gravatar_id": "", "url": "https://api.github.com/users/silverriver", "html_url": "https://github.com/silverriver", "followers_url": "https://api.github.com/users/silverriver/followers", "following_url": "https://api.github.com/users/silverriver/following{/other_user}", "gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}", "starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/silverriver/subscriptions", "organizations_url": "https://api.github.com/users/silverriver/orgs", "repos_url": "https://api.github.com/users/silverriver/repos", "events_url": "https://api.github.com/users/silverriver/events{/privacy}", "received_events_url": "https://api.github.com/users/silverriver/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
1,654,013,457,000
1,654,593,849,000
1,654,593,849,000
CONTRIBUTOR
null
## Describe the bug When making dummy data with the `datasets-cli dummy_data` tool, an error will be raised if you use a nested _URLS in your dataset script. Traceback (most recent call last): File "/home/name/LCCC/datasets/src/datasets/commands/datasets_cli.py", line 43, in <module> main() File "/home/name/LCCC/datasets/src/datasets/commands/datasets_cli.py", line 39, in main service.run() File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 311, in run self._autogenerate_dummy_data( File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 337, in _autogenerate_dummy_data dataset_builder._split_generators(dl_manager) File "/home/name/.cache/huggingface/modules/datasets_modules/datasets/personal_dialog/559332bced5eeafa7f7efc2a7c10ce02cee2a8116bbab4611c35a50ba2715b77/personal_dialog.py", line 108, in _split_generators data_dir = dl_manager.download_and_extract(urls) File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 56, in download_and_extract dummy_output = self.mock_download_manager.download(url_or_urls) File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 130, in download return self.download_and_extract(data_url) File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 122, in download_and_extract return self.create_dummy_data_dict(dummy_file, data_url) File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 165, in create_dummy_data_dict if isinstance(first_value, str) and len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()): TypeError: unhashable type: 'list' ## Steps to reproduce the bug You can use my dataset script implemented here: https://github.com/silverriver/datasets/blob/2ecd36760c40b8e29b1137cd19b5bad0e19c76fd/datasets/personal_dialog/personal_dialog.py ```python datasets_cli dummy_data datasets/personal_dialog --auto_generate ``` You can change https://github.com/silverriver/datasets/blob/2ecd36760c40b8e29b1137cd19b5bad0e19c76fd/datasets/personal_dialog/personal_dialog.py#L54 to ``` "train": "https://huggingface.co/datasets/silver/personal_dialog/resolve/main/dev_random.jsonl.gz" ``` before running the above script to avoid downloading the large training data. ## Expected results The dummy data should be generated. ## Actual results An error is raised. It seems that in https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/src/datasets/download/mock_download_manager.py#L165 we only check if the first item of dummy_data_dict.values() is a str. However, dummy_data_dict.values() may have types like [str, list, list]. A simple fix would be changing https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/src/datasets/download/mock_download_manager.py#L165 to ```python if all([isinstance(value, str) for value in dummy_data_dict.values()]) and len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()): ``` But I don't know if this kind of change may bring any side effects, since I am not sure about the detailed logic here. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Linux - Python version: Python 3.9.10 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4428/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4428/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4427
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4427/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4427/comments
https://api.github.com/repos/huggingface/datasets/issues/4427/events
https://github.com/huggingface/datasets/pull/4427
1,253,959,313
PR_kwDODunzps44vyGg
4,427
Add HF.co for PRs/Issues for specific datasets
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654,007,481,000
1,654,087,062,000
1,654,086,542,000
MEMBER
null
As in https://github.com/huggingface/transformers/pull/17485, issues and PR for datasets under a namespace have to be on the HF Hub
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4427/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4427/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4427", "html_url": "https://github.com/huggingface/datasets/pull/4427", "diff_url": "https://github.com/huggingface/datasets/pull/4427.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4427.patch", "merged_at": 1654086542000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4426
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4426/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4426/comments
https://api.github.com/repos/huggingface/datasets/issues/4426/events
https://github.com/huggingface/datasets/issues/4426
1,253,887,311
I_kwDODunzps5KvM1P
4,426
Add loading variable number of columns for different splits
{ "login": "DrMatters", "id": 22641583, "node_id": "MDQ6VXNlcjIyNjQxNTgz", "avatar_url": "https://avatars.githubusercontent.com/u/22641583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DrMatters", "html_url": "https://github.com/DrMatters", "followers_url": "https://api.github.com/users/DrMatters/followers", "following_url": "https://api.github.com/users/DrMatters/following{/other_user}", "gists_url": "https://api.github.com/users/DrMatters/gists{/gist_id}", "starred_url": "https://api.github.com/users/DrMatters/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DrMatters/subscriptions", "organizations_url": "https://api.github.com/users/DrMatters/orgs", "repos_url": "https://api.github.com/users/DrMatters/repos", "events_url": "https://api.github.com/users/DrMatters/events{/privacy}", "received_events_url": "https://api.github.com/users/DrMatters/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Hi! Indeed the column is missing, but you shouldn't get an error? Have you made some modifications (locally) to the loading script? I've opened a PR to add the missing columns to the script. " ]
1,654,004,416,000
1,654,273,525,000
1,654,273,525,000
NONE
null
**Is your feature request related to a problem? Please describe.** The original dataset `blended_skill_talk` consists of different sets of columns for the different splits: the (test/valid) splits have an additional data column `label_candidates` that the (train) split doesn't have. When loading such data, an exception occurs at `table.py:cast_table_to_schema` because of the mismatched columns.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4426/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4426/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4425
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4425/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4425/comments
https://api.github.com/repos/huggingface/datasets/issues/4425/events
https://github.com/huggingface/datasets/pull/4425
1,253,641,604
PR_kwDODunzps44uuDq
4,425
Make extensions case-insensitive in timit_asr dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,653,991,804,000
1,654,092,930,000
1,654,092,411,000
MEMBER
null
Related to #4422.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4425/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4425/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4425", "html_url": "https://github.com/huggingface/datasets/pull/4425", "diff_url": "https://github.com/huggingface/datasets/pull/4425.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4425.patch", "merged_at": 1654092411000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4424
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4424/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4424/comments
https://api.github.com/repos/huggingface/datasets/issues/4424/events
https://github.com/huggingface/datasets/pull/4424
1,253,542,488
PR_kwDODunzps44uZBD
4,424
Fix DuplicatedKeysError in timit_asr dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,653,986,865,000
1,654,005,050,000
1,654,004,551,000
MEMBER
null
Fix #4422.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4424/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4424/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4424", "html_url": "https://github.com/huggingface/datasets/pull/4424", "diff_url": "https://github.com/huggingface/datasets/pull/4424.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4424.patch", "merged_at": 1654004551000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4423
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4423/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4423/comments
https://api.github.com/repos/huggingface/datasets/issues/4423/events
https://github.com/huggingface/datasets/pull/4423
1,253,326,023
PR_kwDODunzps44trdP
4,423
Add new dataset MMChat
{ "login": "silverriver", "id": 2529049, "node_id": "MDQ6VXNlcjI1MjkwNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4", "gravatar_id": "", "url": "https://api.github.com/users/silverriver", "html_url": "https://github.com/silverriver", "followers_url": "https://api.github.com/users/silverriver/followers", "following_url": "https://api.github.com/users/silverriver/following{/other_user}", "gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}", "starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/silverriver/subscriptions", "organizations_url": "https://api.github.com/users/silverriver/orgs", "repos_url": "https://api.github.com/users/silverriver/repos", "events_url": "https://api.github.com/users/silverriver/events{/privacy}", "received_events_url": "https://api.github.com/users/silverriver/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks ! As for https://github.com/huggingface/datasets/pull/4431 please also update the licensing section in https://huggingface.co/datasets/silver/mmchat ;)\r\n\r\nThen if it's fine for you feel free to close this PR" ]
1,653,972,307,000
1,654,951,252,000
1,654,950,702,000
CONTRIBUTOR
null
Hi, I am adding a new dataset, MMChat. It seems that all tests have passed.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4423/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4423/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4423", "html_url": "https://github.com/huggingface/datasets/pull/4423", "diff_url": "https://github.com/huggingface/datasets/pull/4423.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4423.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4422
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4422/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4422/comments
https://api.github.com/repos/huggingface/datasets/issues/4422/events
https://github.com/huggingface/datasets/issues/4422
1,253,146,511
I_kwDODunzps5KsX-P
4,422
Cannot load timit_asr data set
{ "login": "bhaddow", "id": 992795, "node_id": "MDQ6VXNlcjk5Mjc5NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/992795?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhaddow", "html_url": "https://github.com/bhaddow", "followers_url": "https://api.github.com/users/bhaddow/followers", "following_url": "https://api.github.com/users/bhaddow/following{/other_user}", "gists_url": "https://api.github.com/users/bhaddow/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhaddow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhaddow/subscriptions", "organizations_url": "https://api.github.com/users/bhaddow/orgs", "repos_url": "https://api.github.com/users/bhaddow/repos", "events_url": "https://api.github.com/users/bhaddow/events{/privacy}", "received_events_url": "https://api.github.com/users/bhaddow/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @bhaddow.\r\n\r\nI'm fixing it.", "Thanks for the quick fix!", "@bhaddow we have also made a fix so that you don't have to convert to uppercase the file extensions of the LDC data.\r\n\r\nWould you mind checking if it works OK now for you and reporting if there are any issues? Thanks. ", "Hi @albertvillanova -It loads fine on a copy of the data from deepai - although I have to remove the copies of the .WAV files (with extension .WAV,wav). On a copy of the data that was obtained from the LDC, the glob still fails to find the files. The LDC copy looks like it was copied from CD, in 2004, so the structure may be different to a current download.", "Ah, if I change the train/ and test/ directories to TRAIN/ and TEST/ then it works!", "Thanks for your investigation and report, @bhaddow. I'm adding another fix for the TRAIN/train and TEST/test directory names." ]
1,653,948,022,000
1,654,151,645,000
1,654,004,551,000
NONE
null
## Describe the bug I am trying to load the timit_asr dataset. I have tried with a copy from the LDC and a copy from deepai. In both cases they fail with a "duplicate key" error. With the LDC version I have to convert the file extensions all to upper-case before I can load it at all. ## Steps to reproduce the bug ```python timit = datasets.load_dataset("timit_asr", data_dir = "/path/to/dataset") # Sample code to reproduce the bug ``` ## Expected results The dataset should load without error. It worked for me before the LDC URL change. ## Actual results ``` datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: SA1 Keys should be unique and deterministic in nature ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.2 - Platform: Linux-5.4.0-90-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4422/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4422/timeline
null
completed
null
null
false