| Column | Dtype | Stats |
| --- | --- | --- |
| url | stringlengths | 61–61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 75–75 |
| comments_url | stringlengths | 70–70 |
| events_url | stringlengths | 68–68 |
| html_url | stringlengths | 49–51 |
| id | int64 | 778M–1.87B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1.68k–6.18k |
| title | stringlengths | 1–290 |
| user | dict | |
| labels | listlengths | 0–4 |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | listlengths | 0–4 |
| milestone | dict | |
| comments | sequencelengths | 0–30 |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | stringclasses | 3 values |
| active_lock_reason | float64 | |
| body | stringlengths | 0–228k |
| reactions | dict | |
| timeline_url | stringlengths | 70–70 |
| performed_via_github_app | float64 | |
| state_reason | stringclasses | 3 values |
| draft | float64 | 0–1 |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/1778
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1778/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1778/comments
https://api.github.com/repos/huggingface/datasets/issues/1778/events
https://github.com/huggingface/datasets/pull/1778
793,474,507
MDExOlB1bGxSZXF1ZXN0NTYxMTU2Mzk1
1,778
Narrative QA Manual
{ "avatar_url": "https://avatars.githubusercontent.com/u/18527321?v=4", "events_url": "https://api.github.com/users/rsanjaykamath/events{/privacy}", "followers_url": "https://api.github.com/users/rsanjaykamath/followers", "following_url": "https://api.github.com/users/rsanjaykamath/following{/other_user}", "gists_url": "https://api.github.com/users/rsanjaykamath/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rsanjaykamath", "id": 18527321, "login": "rsanjaykamath", "node_id": "MDQ6VXNlcjE4NTI3MzIx", "organizations_url": "https://api.github.com/users/rsanjaykamath/orgs", "received_events_url": "https://api.github.com/users/rsanjaykamath/received_events", "repos_url": "https://api.github.com/users/rsanjaykamath/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rsanjaykamath/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rsanjaykamath/subscriptions", "type": "User", "url": "https://api.github.com/users/rsanjaykamath" }
[]
closed
false
null
[]
null
[ "@lhoestq sorry I opened a new pull request because of some issues with the previous code base. This pull request is originally from #1364", "Excellent comments. Thanks for those valuable suggestions. I changed everything as you have pointed out :) ", "I've copied the same template as NarrativeQA now. Please let me know if this is fine. ", "> Awesome thank you !!\r\n> This looks all good :)\r\n> \r\n> Just before we merge, I was wondering if you knew why the number of examples in the train set went from 1102 to 32747 in your last commit ? I can't see why the changes in the code would cause such a big difference\r\n\r\nOk the change was the way I presented the data. \r\nIn my previous code, I presented a story with a list of questions-answers related to the story per sample. So the total 1102 was the number of stories (not questions) in the train set. \r\n\r\nIn the case of `NarrativeQA`, the code presented each sample data with one single question. So the story gets replicated as many times based on number of questions per story. I felt this was not really memory efficient so I had coded the way I did earlier. \r\n\r\nBut since this would be inconsistent as you pointed out, I modified my code to suit the `NarrativeQA` approach. Hope it's clear now :) ", "Ok I see ! that makes sense", "Thanks for your time and helping me with all this :) Really appreciate the hardwork you guys do. " ]
"2021-01-25T15:22:31Z"
"2021-01-29T09:35:14Z"
"2021-01-29T09:34:51Z"
CONTRIBUTOR
null
Submitting the manual version of the Narrative QA script, which requires a manual download from the original repository
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1778/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1778/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1778.diff", "html_url": "https://github.com/huggingface/datasets/pull/1778", "merged_at": "2021-01-29T09:34:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/1778.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1778" }
true
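The PR above adds a dataset that needs a manual download before loading. A minimal usage sketch; the loading-script name `narrativeqa_manual` and the `data_dir` layout are assumptions based on the PR title and description, not stated in them:

```python
from datasets import load_dataset

# Manual-download datasets take a data_dir pointing at the files you
# fetched yourself; "narrativeqa_manual" and the path are illustrative.
dataset = load_dataset(
    "narrativeqa_manual",
    data_dir="/path/to/manually/downloaded/narrativeqa",
)
print(dataset["train"][0])
```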
https://api.github.com/repos/huggingface/datasets/issues/1777
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1777/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1777/comments
https://api.github.com/repos/huggingface/datasets/issues/1777/events
https://github.com/huggingface/datasets/issues/1777
793,273,770
MDU6SXNzdWU3OTMyNzM3NzA=
1,777
GPT2 MNLI training using run_glue.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/76427077?v=4", "events_url": "https://api.github.com/users/nlp-student/events{/privacy}", "followers_url": "https://api.github.com/users/nlp-student/followers", "following_url": "https://api.github.com/users/nlp-student/following{/other_user}", "gists_url": "https://api.github.com/users/nlp-student/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nlp-student", "id": 76427077, "login": "nlp-student", "node_id": "MDQ6VXNlcjc2NDI3MDc3", "organizations_url": "https://api.github.com/users/nlp-student/orgs", "received_events_url": "https://api.github.com/users/nlp-student/received_events", "repos_url": "https://api.github.com/users/nlp-student/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nlp-student/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nlp-student/subscriptions", "type": "User", "url": "https://api.github.com/users/nlp-student" }
[]
closed
false
null
[]
null
[]
"2021-01-25T10:53:52Z"
"2021-01-25T11:12:53Z"
"2021-01-25T11:12:53Z"
NONE
null
Edit: I'm closing this because I actually meant to post this in `transformers`, not `datasets`. Running this on Google Colab, ``` !python run_glue.py \ --model_name_or_path gpt2 \ --task_name mnli \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_gpu_train_batch_size 10 \ --gradient_accumulation_steps 32 \ --learning_rate 2e-5 \ --num_train_epochs 3.0 \ --output_dir models/gpt2/mnli/ ``` I get the following error: ``` "Asking to pad but the tokenizer does not have a padding token. " ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`. ``` Do I need to modify the trainer to work with GPT2?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1777/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1777/timeline
null
completed
null
null
false
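The error message in the issue above already names the workaround. A minimal sketch of it, assuming a current `transformers` install (the issue was redirected there, so this is illustrative only):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
# GPT-2 ships without a padding token; reuse EOS, as the error suggests.
tokenizer.pad_token = tokenizer.eos_token

# Padding now works for batched inputs.
batch = tokenizer(
    ["a premise", "a somewhat longer hypothesis sentence"],
    padding=True, truncation=True, return_tensors="pt",
)
print(batch["input_ids"].shape)
```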
https://api.github.com/repos/huggingface/datasets/issues/1776
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1776/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1776/comments
https://api.github.com/repos/huggingface/datasets/issues/1776/events
https://github.com/huggingface/datasets/issues/1776
792,755,249
MDU6SXNzdWU3OTI3NTUyNDk=
1,776
[Question & Bug Report] Can we preprocess a dataset on the fly?
{ "avatar_url": "https://avatars.githubusercontent.com/u/14048129?v=4", "events_url": "https://api.github.com/users/shuaihuaiyi/events{/privacy}", "followers_url": "https://api.github.com/users/shuaihuaiyi/followers", "following_url": "https://api.github.com/users/shuaihuaiyi/following{/other_user}", "gists_url": "https://api.github.com/users/shuaihuaiyi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shuaihuaiyi", "id": 14048129, "login": "shuaihuaiyi", "node_id": "MDQ6VXNlcjE0MDQ4MTI5", "organizations_url": "https://api.github.com/users/shuaihuaiyi/orgs", "received_events_url": "https://api.github.com/users/shuaihuaiyi/received_events", "repos_url": "https://api.github.com/users/shuaihuaiyi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shuaihuaiyi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shuaihuaiyi/subscriptions", "type": "User", "url": "https://api.github.com/users/shuaihuaiyi" }
[]
closed
false
null
[]
null
[ "We are very actively working on this. How does your dataset look like in practice (number/size/type of files)?", "It's a text file with many lines (about 1B) of Chinese sentences. I use it to train language model using https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm_wwm.py", "Indeed I will submit a PR in a fez days to enable processing on-the-fly :)\r\nThis can be useful in language modeling for tokenization, padding etc.\r\n", "any update on this issue? ...really look forward to use it ", "Hi @acul3,\r\n\r\nPlease look at the discussion on a related Issue #1825. I think using `set_transform` after building from source should do.", "@gchhablani thank you so much\r\n\r\nwill try look at it" ]
"2021-01-24T09:28:24Z"
"2021-05-20T04:15:58Z"
"2021-05-20T04:15:58Z"
NONE
null
I know we can use `Datasets.map` to preprocess a dataset, but I'm using it with a very large corpus which generates a huge cache file (several TB of cache from a 400 GB text file). I have no disk large enough to store it. Can we preprocess a dataset on the fly without generating a cache? BTW, I tried raising `writer_batch_size`. That argument seems to have no effect when it's larger than `batch_size`, because every batch is saved immediately after it's processed. Please check the following code: https://github.com/huggingface/datasets/blob/0281f9d881f3a55c89aeaa642f1ba23444b64083/src/datasets/arrow_dataset.py#L1532
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1776/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1776/timeline
null
completed
null
null
false
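The closing comments point to `set_transform` (available in later releases) for preprocessing on the fly without a cache file. A minimal sketch, assuming a BERT-style tokenizer and a plain-text training file; the model name and file path are illustrative:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")

def tokenize(batch):
    # Runs lazily at access time, so no tokenized cache file is written.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=512)

dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]
dataset.set_transform(tokenize)  # applied on the fly, per accessed batch
print(dataset[0].keys())
```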
https://api.github.com/repos/huggingface/datasets/issues/1775
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1775/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1775/comments
https://api.github.com/repos/huggingface/datasets/issues/1775/events
https://github.com/huggingface/datasets/issues/1775
792,742,120
MDU6SXNzdWU3OTI3NDIxMjA=
1,775
Efficient ways to iterate the dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/11826803?v=4", "events_url": "https://api.github.com/users/zhongpeixiang/events{/privacy}", "followers_url": "https://api.github.com/users/zhongpeixiang/followers", "following_url": "https://api.github.com/users/zhongpeixiang/following{/other_user}", "gists_url": "https://api.github.com/users/zhongpeixiang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zhongpeixiang", "id": 11826803, "login": "zhongpeixiang", "node_id": "MDQ6VXNlcjExODI2ODAz", "organizations_url": "https://api.github.com/users/zhongpeixiang/orgs", "received_events_url": "https://api.github.com/users/zhongpeixiang/received_events", "repos_url": "https://api.github.com/users/zhongpeixiang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zhongpeixiang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhongpeixiang/subscriptions", "type": "User", "url": "https://api.github.com/users/zhongpeixiang" }
[]
closed
false
null
[]
null
[ "It seems that selecting a subset of colums directly from the dataset, i.e., dataset[\"column\"], is slow.", "I was wrong, ```dataset[\"column\"]``` is fast." ]
"2021-01-24T07:54:31Z"
"2021-01-24T09:50:39Z"
"2021-01-24T09:50:39Z"
CONTRIBUTOR
null
For a large dataset that does not fit in memory, how can I select only a subset of features from each example? If I iterate over the dataset and then select the subset of features one by one, the resulting memory usage will be huge. Any way to solve this? Thanks
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1775/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1775/timeline
null
completed
null
null
false
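The comments conclude that pulling a column directly is fast. A short sketch of both options; `imdb` stands in for the user's large dataset (an assumption for illustration):

```python
from datasets import load_dataset

# A loaded Dataset is memory-mapped from disk, so rows are only
# materialized on access.
dataset = load_dataset("imdb", split="train")

# Restrict which columns get decoded when indexing:
dataset.set_format(columns=["label"])
print(dataset[0])  # only {'label': ...} is materialized

# Or pull a single column directly, which the thread found to be fast:
labels = dataset["label"]
```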
https://api.github.com/repos/huggingface/datasets/issues/1774
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1774/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1774/comments
https://api.github.com/repos/huggingface/datasets/issues/1774/events
https://github.com/huggingface/datasets/issues/1774
792,730,559
MDU6SXNzdWU3OTI3MzA1NTk=
1,774
Is it possible to make slicing more compatible with Python lists and NumPy?
{ "avatar_url": "https://avatars.githubusercontent.com/u/7607120?v=4", "events_url": "https://api.github.com/users/world2vec/events{/privacy}", "followers_url": "https://api.github.com/users/world2vec/followers", "following_url": "https://api.github.com/users/world2vec/following{/other_user}", "gists_url": "https://api.github.com/users/world2vec/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/world2vec", "id": 7607120, "login": "world2vec", "node_id": "MDQ6VXNlcjc2MDcxMjA=", "organizations_url": "https://api.github.com/users/world2vec/orgs", "received_events_url": "https://api.github.com/users/world2vec/received_events", "repos_url": "https://api.github.com/users/world2vec/repos", "site_admin": false, "starred_url": "https://api.github.com/users/world2vec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/world2vec/subscriptions", "type": "User", "url": "https://api.github.com/users/world2vec" }
[]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
null
[ "Hi ! Thanks for reporting.\r\nI am working on changes in the way data are sliced from arrow. I can probably fix your issue with the changes I'm doing.\r\nIf you have some code to reproduce the issue it would be nice so I can make sure that this case will be supported :)\r\nI'll make a PR in a few days ", "Good if you can take care at your side.\r\nHere is the [colab notebook](https://colab.research.google.com/drive/19c-abm87RTRYgW9G1D8ktfwRW95zDYBZ?usp=sharing)" ]
"2021-01-24T06:15:52Z"
"2022-06-01T15:54:50Z"
null
NONE
null
Hi, see the error below: ``` AssertionError: Requested slice [:10000000000000000] incompatible with 20 examples. ```
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1774/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1774/timeline
null
null
null
null
false
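For contrast with the report above: Python lists and NumPy clamp out-of-range slices instead of raising. A tiny sketch of the difference the reporter is asking about:

```python
import numpy as np

data = list(range(20))
print(len(data[:10000000000000000]))      # 20; Python clamps the slice
print(np.arange(20)[:10000000000000000])  # NumPy clamps it as well

# With datasets at the time of the report, the equivalent slice on a
# 20-example Dataset raised:
# AssertionError: Requested slice [:10000000000000000] incompatible with 20 examples.
```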
https://api.github.com/repos/huggingface/datasets/issues/1773
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1773/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1773/comments
https://api.github.com/repos/huggingface/datasets/issues/1773/events
https://github.com/huggingface/datasets/issues/1773
792,708,160
MDU6SXNzdWU3OTI3MDgxNjA=
1,773
bug in loading datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghost", "id": 10137, "login": "ghost", "node_id": "MDQ6VXNlcjEwMTM3", "organizations_url": "https://api.github.com/users/ghost/orgs", "received_events_url": "https://api.github.com/users/ghost/received_events", "repos_url": "https://api.github.com/users/ghost/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "type": "User", "url": "https://api.github.com/users/ghost" }
[]
closed
false
null
[]
null
[ "Looks like an issue with your csv file. Did you use the right delimiter ?\r\nApparently at line 37 the CSV reader from pandas reads 2 fields instead of 1.", "Note that you can pass any argument you would pass to `pandas.read_csv` as kwargs to `load_dataset`. For example you can do\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files=data_files, sep=\"\\t\")\r\n```\r\n\r\nfor example to use a tab separator.\r\n\r\nYou can see the full list of arguments here: https://github.com/huggingface/datasets/blob/master/src/datasets/packaged_modules/csv/csv.py\r\n\r\n(I've not found the list in the documentation though, we definitely must add them !)", "You can try to convert the file to (CSV UTF-8)" ]
"2021-01-24T02:53:45Z"
"2021-09-06T08:54:46Z"
"2021-08-04T18:13:01Z"
NONE
null
Hi, I need to load a dataset and I use these commands: ``` from datasets import load_dataset dataset = load_dataset('csv', data_files={'train': 'sick/train.csv', 'test': 'sick/test.csv', 'validation': 'sick/validation.csv'}) print(dataset['validation']) ``` The files under sick/ are simple csv files representing the data. I am getting this error; do you have an idea how I can solve it? Thank you @lhoestq ``` Using custom data configuration default Downloading and preparing dataset csv/default-61468fc71a743ec1 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /julia/cache_home_2/datasets/csv/default-61468fc71a743ec1/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2... Traceback (most recent call last): File "/julia/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets-1.2.0-py3.7.egg/datasets/builder.py", line 485, in incomplete_dir yield tmp_dir File "/julia/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets-1.2.0-py3.7.egg/datasets/builder.py", line 527, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/julia/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets-1.2.0-py3.7.egg/datasets/builder.py", line 604, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/julia/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets-1.2.0-py3.7.egg/datasets/builder.py", line 959, in _prepare_split for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose): File "/julia/libs/anaconda3/envs/success/lib/python3.7/site-packages/tqdm-4.49.0-py3.7.egg/tqdm/std.py", line 1133, in __iter__ for obj in iterable: File "/julia/cache_home_2/modules/datasets_modules/datasets/csv/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2/csv.py", line 129, in _generate_tables for batch_idx, df in enumerate(csv_file_reader): File "/julia/libs/anaconda3/envs/success/lib/python3.7/site-packages/pandas-1.2.0-py3.7-linux-x86_64.egg/pandas/io/parsers.py", line 1029, in __next__ return self.get_chunk() File "/julia/libs/anaconda3/envs/success/lib/python3.7/site-packages/pandas-1.2.0-py3.7-linux-x86_64.egg/pandas/io/parsers.py", line 1079, in get_chunk return self.read(nrows=size) File "/julia/libs/anaconda3/envs/success/lib/python3.7/site-packages/pandas-1.2.0-py3.7-linux-x86_64.egg/pandas/io/parsers.py", line 1052, in read index, columns, col_dict = self._engine.read(nrows) File "/julia/libs/anaconda3/envs/success/lib/python3.7/site-packages/pandas-1.2.0-py3.7-linux-x86_64.egg/pandas/io/parsers.py", line 2056, in read data = self._reader.read(nrows) File "pandas/_libs/parsers.pyx", line 756, in pandas._libs.parsers.TextReader.read File "pandas/_libs/parsers.pyx", line 783, in pandas._libs.parsers.TextReader._read_low_memory File "pandas/_libs/parsers.pyx", line 827, in pandas._libs.parsers.TextReader._read_rows File "pandas/_libs/parsers.pyx", line 814, in pandas._libs.parsers.TextReader._tokenize_rows File "pandas/_libs/parsers.pyx", line 1951, in pandas._libs.parsers.raise_parser_error pandas.errors.ParserError: Error tokenizing data. C error: Expected 1 fields in line 37, saw 2 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "write_sick.py", line 19, in <module> 'validation': 'sick/validation.csv'}) File "/julia/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets-1.2.0-py3.7.egg/datasets/load.py", line 612, in load_dataset ignore_verifications=ignore_verifications, File "/julia/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets-1.2.0-py3.7.egg/datasets/builder.py", line 534, in download_and_prepare self._save_info() File "/julia/libs/anaconda3/envs/success/lib/python3.7/contextlib.py", line 130, in __exit__ self.gen.throw(type, value, traceback) File "/julia/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets-1.2.0-py3.7.egg/datasets/builder.py", line 491, in incomplete_dir shutil.rmtree(tmp_dir) File "/julia/libs/anaconda3/envs/success/lib/python3.7/shutil.py", line 498, in rmtree onerror(os.rmdir, path, sys.exc_info()) File "/julia/libs/anaconda3/envs/success/lib/python3.7/shutil.py", line 496, in rmtree os.rmdir(path) OSError: [Errno 39] Directory not empty: '/julia/cache_home_2/datasets/csv/default-61468fc71a743ec1/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2.incomplete' ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1773/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1773/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1772
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1772/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1772/comments
https://api.github.com/repos/huggingface/datasets/issues/1772/events
https://github.com/huggingface/datasets/issues/1772
792,703,797
MDU6SXNzdWU3OTI3MDM3OTc=
1,772
Adding SICK dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghost", "id": 10137, "login": "ghost", "node_id": "MDQ6VXNlcjEwMTM3", "organizations_url": "https://api.github.com/users/ghost/orgs", "received_events_url": "https://api.github.com/users/ghost/received_events", "repos_url": "https://api.github.com/users/ghost/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "type": "User", "url": "https://api.github.com/users/ghost" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
"2021-01-24T02:15:31Z"
"2021-02-05T15:49:25Z"
"2021-02-05T15:49:25Z"
NONE
null
Hi, it would be great to include the SICK dataset. ## Adding a Dataset - **Name:** SICK - **Description:** a well-known entailment dataset - **Paper:** http://marcobaroni.org/composes/sick.html - **Data:** http://marcobaroni.org/composes/sick.html - **Motivation:** this is an important NLI benchmark Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Thanks
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1772/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1772/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1771
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1771/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1771/comments
https://api.github.com/repos/huggingface/datasets/issues/1771/events
https://github.com/huggingface/datasets/issues/1771
792,701,276
MDU6SXNzdWU3OTI3MDEyNzY=
1,771
Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.1/datasets/csv/csv.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/7607120?v=4", "events_url": "https://api.github.com/users/world2vec/events{/privacy}", "followers_url": "https://api.github.com/users/world2vec/followers", "following_url": "https://api.github.com/users/world2vec/following{/other_user}", "gists_url": "https://api.github.com/users/world2vec/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/world2vec", "id": 7607120, "login": "world2vec", "node_id": "MDQ6VXNlcjc2MDcxMjA=", "organizations_url": "https://api.github.com/users/world2vec/orgs", "received_events_url": "https://api.github.com/users/world2vec/received_events", "repos_url": "https://api.github.com/users/world2vec/repos", "site_admin": false, "starred_url": "https://api.github.com/users/world2vec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/world2vec/subscriptions", "type": "User", "url": "https://api.github.com/users/world2vec" }
[]
closed
false
null
[]
null
[ "I temporary manually download csv.py as custom dataset loading script", "Indeed in 1.2.1 the script to process csv file is downloaded. Starting from the next release though we include the csv processing directly in the library.\r\nSee PR #1726 \r\nWe'll do a new release soon :)", "Thanks." ]
"2021-01-24T01:53:52Z"
"2021-01-24T23:06:29Z"
"2021-01-24T23:06:29Z"
NONE
null
Hi, when I load_dataset from local csv files, the error below occurred; it looks like raw.githubusercontent.com is blocked by the Chinese government. But why does it need to download csv.py? Shouldn't it be included when pip installing the library? ``` Traceback (most recent call last): File "/home/tom/pyenv/pystory/lib/python3.6/site-packages/datasets/load.py", line 267, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/home/tom/pyenv/pystory/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 343, in cached_path max_retries=download_config.max_retries, File "/home/tom/pyenv/pystory/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 617, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.1/datasets/csv/csv.py ```
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/1771/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1771/timeline
null
completed
null
null
false
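The first comment's workaround is to point `load_dataset` at a locally saved copy of the script instead of letting the library fetch it from GitHub. A sketch; the local paths are illustrative:

```python
from datasets import load_dataset

# csv.py downloaded manually (e.g. from the datasets 1.2.1 tag) and used
# as a custom loading script, as described in the first comment.
dataset = load_dataset(
    "./csv.py",
    data_files={"train": "train.csv"},
)
```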
https://api.github.com/repos/huggingface/datasets/issues/1770
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1770/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1770/comments
https://api.github.com/repos/huggingface/datasets/issues/1770/events
https://github.com/huggingface/datasets/issues/1770
792,698,148
MDU6SXNzdWU3OTI2OTgxNDg=
1,770
How can I combine 2 datasets with different/same features?
{ "avatar_url": "https://avatars.githubusercontent.com/u/7607120?v=4", "events_url": "https://api.github.com/users/world2vec/events{/privacy}", "followers_url": "https://api.github.com/users/world2vec/followers", "following_url": "https://api.github.com/users/world2vec/following{/other_user}", "gists_url": "https://api.github.com/users/world2vec/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/world2vec", "id": 7607120, "login": "world2vec", "node_id": "MDQ6VXNlcjc2MDcxMjA=", "organizations_url": "https://api.github.com/users/world2vec/orgs", "received_events_url": "https://api.github.com/users/world2vec/received_events", "repos_url": "https://api.github.com/users/world2vec/repos", "site_admin": false, "starred_url": "https://api.github.com/users/world2vec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/world2vec/subscriptions", "type": "User", "url": "https://api.github.com/users/world2vec" }
[]
closed
false
null
[]
null
[ "Hi ! Currently we don't have a way to `zip` datasets but we plan to add this soon :)\r\nFor now you'll need to use `map` to add the fields from one dataset to the other. See the comment here for more info : https://github.com/huggingface/datasets/issues/853#issuecomment-727872188", "Good to hear.\r\nCurrently I did not use map , just fetch src and tgt from the 2 dataset and merge them.\r\nIt will be a release if you can deal with it at the backend.\r\nThanks.", "Hi! You can rename the columns and concatenate the datasets along `axis=1` to get the desired result as follows:\r\n```python\r\nds1 = ds1.rename_column(\"text\", \"src\")\r\nds2 = ds2.rename_column(\"text\", \"tgt\")\r\nds = datasets.concatenate_datasets([\"ds1\", \"ds2\"], axis=1)\r\n```" ]
"2021-01-24T01:26:06Z"
"2022-06-01T15:43:15Z"
"2022-06-01T15:43:15Z"
NONE
null
How can I combine 2 datasets with a one-to-one mapping, like ds = zip(ds1, ds2)? Same feature: ds1: {'text'}, ds2: {'text'}, combined ds: {'src', 'tgt'}; or different features: ds1: {'src'}, ds2: {'tgt'}, combined ds: {'src', 'tgt'}.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1770/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1770/timeline
null
completed
null
null
false
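The last comment sketches the rename-and-concatenate approach, but as quoted it passes the dataset names as strings to `concatenate_datasets`. A corrected, self-contained sketch, with toy two-row datasets as an assumption:

```python
import datasets

ds1 = datasets.Dataset.from_dict({"text": ["a", "b"]}).rename_column("text", "src")
ds2 = datasets.Dataset.from_dict({"text": ["x", "y"]}).rename_column("text", "tgt")

# Pass the Dataset objects themselves, not their names as strings:
ds = datasets.concatenate_datasets([ds1, ds2], axis=1)
print(ds.column_names)  # ['src', 'tgt']
```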
https://api.github.com/repos/huggingface/datasets/issues/1769
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1769/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1769/comments
https://api.github.com/repos/huggingface/datasets/issues/1769/events
https://github.com/huggingface/datasets/issues/1769
792,523,284
MDU6SXNzdWU3OTI1MjMyODQ=
1,769
_pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union when calling datasets.map with num_proc=2
{ "avatar_url": "https://avatars.githubusercontent.com/u/14048129?v=4", "events_url": "https://api.github.com/users/shuaihuaiyi/events{/privacy}", "followers_url": "https://api.github.com/users/shuaihuaiyi/followers", "following_url": "https://api.github.com/users/shuaihuaiyi/following{/other_user}", "gists_url": "https://api.github.com/users/shuaihuaiyi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shuaihuaiyi", "id": 14048129, "login": "shuaihuaiyi", "node_id": "MDQ6VXNlcjE0MDQ4MTI5", "organizations_url": "https://api.github.com/users/shuaihuaiyi/orgs", "received_events_url": "https://api.github.com/users/shuaihuaiyi/received_events", "repos_url": "https://api.github.com/users/shuaihuaiyi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shuaihuaiyi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shuaihuaiyi/subscriptions", "type": "User", "url": "https://api.github.com/users/shuaihuaiyi" }
[]
closed
false
null
[]
null
[ "More information: `run_mlm.py` will raise same error when `data_args.line_by_line==True`\r\n\r\nhttps://github.com/huggingface/transformers/blob/9152f16023b59d262b51573714b40325c8e49370/examples/language-modeling/run_mlm.py#L300\r\n", "Hi ! What version of python and datasets do you have ? And also what version of dill and pickle ?", "> Hi ! What version of python and datasets do you have ? And also what version of dill and pickle ?\r\n\r\npython==3.6.10\r\ndatasets==1.2.1\r\ndill==0.3.2\r\npickle.format_version==4.0", "Multiprocessing in python require all the functions to be picklable. More specifically, functions need to be picklable with `dill`.\r\n\r\nHowever objects like `typing.Union[str, NoneType]` are not picklable in python <3.7.\r\nCan you try to update your python version to python>=3.7 ?\r\n" ]
"2021-01-23T10:13:00Z"
"2022-10-05T12:38:51Z"
"2022-10-05T12:38:51Z"
NONE
null
It may be a bug in multiprocessing with Datasets; when I disable multiprocessing by setting num_proc to None, everything works fine. The script I use is https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm_wwm.py Script args: ``` --model_name_or_path ../../../model/chinese-roberta-wwm-ext --train_file /nfs/volume-377-2/bert/data/test/train.txt --output_dir test --do_train --per_device_train_batch_size 2 --gradient_accumulation_steps 2 --learning_rate 1e-4 --max_steps 1000 --warmup_steps 10 --save_steps 1000 --save_total_limit 1 --seed 23333 --max_seq_length 512 --preprocessing_num_workers 2 --cache_dir /nfs/volume-377-2/bert/data/test/cache ``` Here `/nfs/volume-377-2/bert/data/test/train.txt` is just a toy example with 10000 lines of random strings; you should be able to reproduce this error easily. Full traceback: ``` Traceback (most recent call last): File "/nfs/volume-377-2/bert/transformers/examples/language-modeling/run_mlm_wwm.py", line 398, in <module> main() File "/nfs/volume-377-2/bert/transformers/examples/language-modeling/run_mlm_wwm.py", line 325, in main load_from_cache_file=not data_args.overwrite_cache, File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/datasets/dataset_dict.py", line 303, in map for k, dataset in self.items() File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/datasets/dataset_dict.py", line 303, in <dictcomp> for k, dataset in self.items() File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1318, in map transformed_shards = [r.get() for r in results] File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1318, in <listcomp> transformed_shards = [r.get() for r in results] File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/multiprocess/pool.py", line 644, in get raise self._value File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/multiprocess/pool.py", line 424, in _handle_tasks put(task) File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/multiprocess/connection.py", line 209, in send self._send_bytes(_ForkingPickler.dumps(obj)) File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/multiprocess/reduction.py", line 54, in dumps cls(buf, protocol, *args, **kwds).dump(obj) File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/dill/_dill.py", line 446, in dump StockPickler.dump(self, obj) File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 409, in dump self.save(obj) File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 751, in save_tuple save(element) File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/dill/_dill.py", line 933, in save_module_dict StockPickler.save_dict(pickler, obj) File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 821, in save_dict self._batch_setitems(obj.items()) File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 847, in _batch_setitems save(v) File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/dill/_dill.py", line 1438, in save_function obj.__dict__, fkwdefaults), obj=obj) File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 610, in save_reduce save(args) File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 751, in save_tuple save(element) File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 736, in save_tuple save(element) File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/dill/_dill.py", line 1170, in save_cell pickler.save_reduce(_create_cell, (f,), obj=obj) File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 610, in save_reduce save(args) File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 736, in save_tuple save(element) File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 521, in save self.save_reduce(obj=obj, *rv) File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 605, in save_reduce save(cls) File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/dill/_dill.py", line 1365, in save_type obj.__bases__, _dict), obj=obj) File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 610, in save_reduce save(args) File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 751, in save_tuple save(element) File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/dill/_dill.py", line 933, in save_module_dict StockPickler.save_dict(pickler, obj) File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 821, in save_dict self._batch_setitems(obj.items()) File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 847, in _batch_setitems save(v) File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/dill/_dill.py", line 933, in save_module_dict StockPickler.save_dict(pickler, obj) File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 821, in save_dict self._batch_setitems(obj.items()) File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 847, in _batch_setitems save(v) File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 507, in save self.save_global(obj, rv) File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 927, in save_global (obj, module_name, name)) _pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1769/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1769/timeline
null
completed
null
null
false
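The thread's conclusion is that objects like `typing.Union[str, None]` are not dill-picklable on Python < 3.7, so multiprocessed `map` can fail there. A hedged sketch of the fallback the reporter used (disabling multiprocessing); the mapped function and file path are toy stand-ins:

```python
import sys
from datasets import load_dataset

dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]

def add_length(batch):
    return {"n_chars": [len(t) for t in batch["text"]]}

# On Python < 3.7, typing objects captured by the mapped function cannot
# be pickled by dill, so fall back to a single process there.
num_proc = 2 if sys.version_info >= (3, 7) else None
dataset = dataset.map(add_length, batched=True, num_proc=num_proc)
```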
https://api.github.com/repos/huggingface/datasets/issues/1768
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1768/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1768/comments
https://api.github.com/repos/huggingface/datasets/issues/1768/events
https://github.com/huggingface/datasets/pull/1768
792,150,745
MDExOlB1bGxSZXF1ZXN0NTYwMDgyNzIx
1,768
Mention kwargs in the Dataset Formatting docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
[]
closed
false
null
[]
null
[]
"2021-01-22T16:43:20Z"
"2021-01-31T12:33:10Z"
"2021-01-25T09:14:59Z"
CONTRIBUTOR
null
Hi, This was discussed in Issue #1762 where the docs didn't mention that keyword arguments to `datasets.Dataset.set_format()` are allowed. To prevent people from having to check the code/method docs, I just added a couple of lines in the docs. Please let me know your thoughts on this. Thanks, Gunjan @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1768/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1768/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1768.diff", "html_url": "https://github.com/huggingface/datasets/pull/1768", "merged_at": "2021-01-25T09:14:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/1768.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1768" }
true
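For reference, a sketch of the kwargs forwarding this PR documents: extra keyword arguments to `set_format` are passed on to the output conversion (for the torch formatter, `torch.tensor`). The `dtype` kwarg shown is one example; exactly which kwargs are supported depends on the formatter and installed version:

```python
import torch
from datasets import Dataset

ds = Dataset.from_dict({"input_ids": [[1, 2], [3, 4]]})

# Keyword arguments beyond type/columns are forwarded to torch.tensor.
ds.set_format(type="torch", columns=["input_ids"], dtype=torch.long)
print(ds[0]["input_ids"].dtype)  # torch.int64
```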
https://api.github.com/repos/huggingface/datasets/issues/1767
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1767/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1767/comments
https://api.github.com/repos/huggingface/datasets/issues/1767/events
https://github.com/huggingface/datasets/pull/1767
792,068,497
MDExOlB1bGxSZXF1ZXN0NTYwMDE2MzE2
1,767
Add Librispeech ASR
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
[ "> Awesome thank you !\r\n> \r\n> The dummy data are quite big but it was expected given that the raw files are flac files.\r\n> Given that the script doesn't even read the flac files I think we can remove them. Or maybe use empty flac files (see [here](https://hydrogenaud.io/index.php?topic=118685.0) for example). What do you think ?\r\n> \r\n> We'll find a better solution to be able to have bigger dummy_data (max 1MB instead of a few KB, maybe using git LFS.\r\n\r\nHmm, I already made the dummy data as small as possible (a single flac filie per split only). I'd like to keep them at least to have complete dummy data and don't think 500KB for all datasets together is a problem (the long-range summarization datasets are similarly heavy). The moment we allow dummy data to be loaded directly for testing, we need the flac files IMO.\r\n\r\nBut I agree that longterm, we need a better solution for the dummy data (maybe stop hosting it on github to not make the repo too heavy)" ]
"2021-01-22T14:54:37Z"
"2021-01-25T20:38:07Z"
"2021-01-25T20:37:42Z"
MEMBER
null
This PR adds the Librispeech ASR dataset: https://www.tensorflow.org/datasets/catalog/librispeech There are 2 configs, "clean" and "other", and "clean" has two train sets, hence the names "train.100" and "train.360". As suggested by @lhoestq, due to the enormous size of the dataset in `.arrow` format, the speech files are not prepared as float32 arrays up front; instead only the path to the audio file is stored.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1767/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1767/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1767.diff", "html_url": "https://github.com/huggingface/datasets/pull/1767", "merged_at": "2021-01-25T20:37:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/1767.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1767" }
true
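Since the script stores only the path to each audio file, decoding is left to the caller. A usage sketch; `soundfile` and the `file`/`text` column names are assumptions based on the PR description:

```python
import soundfile as sf
from datasets import load_dataset

# "clean" config; the PR mentions the two train splits "train.100"/"train.360".
ds = load_dataset("librispeech_asr", "clean", split="train.100")

sample = ds[0]
# The script stores the path rather than a decoded float32 array;
# decode on demand (any flac-capable reader works, soundfile is one option).
speech_array, sampling_rate = sf.read(sample["file"])
print(sample["text"], speech_array.shape, sampling_rate)
```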
https://api.github.com/repos/huggingface/datasets/issues/1766
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1766/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1766/comments
https://api.github.com/repos/huggingface/datasets/issues/1766/events
https://github.com/huggingface/datasets/issues/1766
792,044,105
MDU6SXNzdWU3OTIwNDQxMDU=
1,766
Issues when running two programs that compute the same metric
{ "avatar_url": "https://avatars.githubusercontent.com/u/8089862?v=4", "events_url": "https://api.github.com/users/lamthuy/events{/privacy}", "followers_url": "https://api.github.com/users/lamthuy/followers", "following_url": "https://api.github.com/users/lamthuy/following{/other_user}", "gists_url": "https://api.github.com/users/lamthuy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lamthuy", "id": 8089862, "login": "lamthuy", "node_id": "MDQ6VXNlcjgwODk4NjI=", "organizations_url": "https://api.github.com/users/lamthuy/orgs", "received_events_url": "https://api.github.com/users/lamthuy/received_events", "repos_url": "https://api.github.com/users/lamthuy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lamthuy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lamthuy/subscriptions", "type": "User", "url": "https://api.github.com/users/lamthuy" }
[]
closed
false
null
[]
null
[ "Hi ! To avoid collisions you can specify a `experiment_id` when instantiating your metric using `load_metric`. It will replace \"default_experiment\" with the experiment id that you provide in the arrow filename. \r\n\r\nAlso when two `experiment_id` collide we're supposed to detect it using our locking mechanism. Not sure why it didn't work in your case. Could you share some code that reproduces the issue ? This would help us investigate.", "Thank you for your response. I fixed the issue by set \"keep_in_memory=True\" when load_metric. \r\nI cannot share the entire source code but below is the wrapper I wrote:\r\n\r\n```python\r\nclass Evaluation:\r\n def __init__(self, metric='sacrebleu'):\r\n # self.metric = load_metric(metric, keep_in_memory=True)\r\n self.metric = load_metric(metric)\r\n\r\n def add(self, predictions, references):\r\n self.metric.add_batch(predictions=predictions, references=references)\r\n\r\n def compute(self):\r\n return self.metric.compute()['score']\r\n```\r\n\r\nThen call the given wrapper as follows:\r\n\r\n```python\r\neval = Evaluation(metric='sacrebleu')\r\nfor query, candidates, labels in tqdm(dataset):\r\n predictions = net.generate(query)\r\n references = [[s] for s in labels]\r\n eval.add(predictions, references)\r\n if n % 100 == 0:\r\n bleu += eval.compute()\r\n eval = Evaluation(metric='sacrebleu')" ]
"2021-01-22T14:22:55Z"
"2021-02-02T10:38:06Z"
"2021-02-02T10:38:06Z"
NONE
null
I got the following error when running two different programs that both compute sacrebleu metrics. It seems that both read and write the same location (.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow) where the batches are cached: ``` File "train_matching_min.py", line 160, in <module> avg_loss = valid(epoch, args.batch, args.validation, args.with_label) File "train_matching_min.py", line 93, in valid bleu += eval.compute() File "/u/tlhoang/projects/seal/match/models/eval.py", line 23, in compute return self.metric.compute()['score'] File "/dccstor/know/anaconda3/lib/python3.7/site-packages/datasets/metric.py", line 387, in compute self._finalize() File "/dccstor/know/anaconda3/lib/python3.7/site-packages/datasets/metric.py", line 355, in _finalize self.data = Dataset(**reader.read_files([{"filename": f} for f in file_paths])) File "/dccstor/know/anaconda3/lib/python3.7/site-packages/datasets/arrow_reader.py", line 231, in read_files pa_table = self._read_files(files) File "/dccstor/know/anaconda3/lib/python3.7/site-packages/datasets/arrow_reader.py", line 170, in _read_files pa_table: pa.Table = self._get_dataset_from_filename(f_dict) File "/dccstor/know/anaconda3/lib/python3.7/site-packages/datasets/arrow_reader.py", line 299, in _get_dataset_from_filename pa_table = f.read_all() File "pyarrow/ipc.pxi", line 481, in pyarrow.lib.RecordBatchReader.read_all File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Expected to read 1819307375 metadata bytes, but only read 454396 ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1766/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1766/timeline
null
completed
null
null
false
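A sketch of the two fixes named in the thread: an `experiment_id` per concurrent run (which replaces "default_experiment" in the cached arrow filename), or the `keep_in_memory=True` flag the reporter settled on:

```python
from datasets import load_metric

# Give each concurrent run its own cache file...
metric = load_metric("sacrebleu", experiment_id="run_a")

# ...or skip the on-disk cache entirely:
metric = load_metric("sacrebleu", keep_in_memory=True)

metric.add_batch(predictions=["hello there"], references=[["hello there"]])
print(metric.compute()["score"])
```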
https://api.github.com/repos/huggingface/datasets/issues/1765
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1765/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1765/comments
https://api.github.com/repos/huggingface/datasets/issues/1765/events
https://github.com/huggingface/datasets/issues/1765
791,553,065
MDU6SXNzdWU3OTE1NTMwNjU=
1,765
Error iterating over Dataset with DataLoader
{ "avatar_url": "https://avatars.githubusercontent.com/u/1295082?v=4", "events_url": "https://api.github.com/users/EvanZ/events{/privacy}", "followers_url": "https://api.github.com/users/EvanZ/followers", "following_url": "https://api.github.com/users/EvanZ/following{/other_user}", "gists_url": "https://api.github.com/users/EvanZ/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/EvanZ", "id": 1295082, "login": "EvanZ", "node_id": "MDQ6VXNlcjEyOTUwODI=", "organizations_url": "https://api.github.com/users/EvanZ/orgs", "received_events_url": "https://api.github.com/users/EvanZ/received_events", "repos_url": "https://api.github.com/users/EvanZ/repos", "site_admin": false, "starred_url": "https://api.github.com/users/EvanZ/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/EvanZ/subscriptions", "type": "User", "url": "https://api.github.com/users/EvanZ" }
[]
closed
false
null
[]
null
[ "Instead of:\r\n```python\r\ndataloader = torch.utils.data.DataLoader(encoded_dataset, batch_sampler=32)\r\n```\r\nIt should be:\r\n```python\r\ndataloader = torch.utils.data.DataLoader(encoded_dataset, batch_size=32)\r\n```\r\n\r\n`batch_sampler` accepts a Sampler object or an Iterable, so you get an error.", "@mariosasko I thought that would fix it, but now I'm getting a different error:\r\n\r\n```\r\n/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py:851: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)\r\n return torch.tensor(x, **format_kwargs)\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\n<ipython-input-20-3af1d82bf93a> in <module>()\r\n 1 dataloader = torch.utils.data.DataLoader(encoded_dataset, batch_size=32)\r\n----> 2 next(iter(dataloader))\r\n\r\n5 frames\r\n/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/collate.py in default_collate(batch)\r\n 53 storage = elem.storage()._new_shared(numel)\r\n 54 out = elem.new(storage)\r\n---> 55 return torch.stack(batch, 0, out=out)\r\n 56 elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \\\r\n 57 and elem_type.__name__ != 'string_':\r\n\r\nRuntimeError: stack expects each tensor to be equal size, but got [7] at entry 0 and [10] at entry 1\r\n```\r\n\r\nAny thoughts what this means?I Do I need padding?", "Yes, padding is an answer. \r\n\r\nThis can be solved easily by passing a callable to the collate_fn arg of DataLoader that adds padding. ", "Padding was the fix, thanks!", "dataloader = torch.utils.data.DataLoader(encoded_dataset, batch_size=4)\r\nbatch = next(iter(dataloader))\r\n\r\ngetting \r\nValueError: cannot reshape array of size 8192 into shape (1,512,4)\r\n\r\nI had put padding as 2048 for encoded_dataset\r\nkindly help", "data_loader_val = torch.utils.data.DataLoader(val_dataset, batch_size=32, shuffle=True, drop_last=False, num_workers=0)\r\ndataiter = iter(data_loader_val)\r\nimages, _ = next(dataiter)\r\n\r\ngetting -> TypeError: 'list' object is not callable\r\n\r\nCannot iterate through the data. Kindly suggest." ]
"2021-01-21T22:56:45Z"
"2022-10-28T02:16:38Z"
"2021-01-23T03:44:14Z"
NONE
null
I have a Dataset that I've mapped a tokenizer over: ``` encoded_dataset.set_format(type='torch',columns=['attention_mask','input_ids','token_type_ids']) encoded_dataset[:1] ``` ``` {'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]), 'input_ids': tensor([[ 101, 178, 1198, 1400, 1714, 22233, 21365, 4515, 8618, 1113, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])} ``` When I try to iterate as in the docs, I get errors: ``` dataloader = torch.utils.data.DataLoader(encoded_dataset, batch_sampler=32) next(iter(dataloader)) ``` ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-45-05180ba8aa35> in <module>() 1 dataloader = torch.utils.data.DataLoader(encoded_dataset, batch_sampler=32) ----> 2 next(iter(dataloader)) 3 frames /usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in __init__(self, loader) 411 self._timeout = loader.timeout 412 self._collate_fn = loader.collate_fn --> 413 self._sampler_iter = iter(self._index_sampler) 414 self._base_seed = torch.empty((), dtype=torch.int64).random_(generator=loader.generator).item() 415 self._persistent_workers = loader.persistent_workers TypeError: 'int' object is not iterable ```
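A minimal sketch of the fix discussed in the comments above: pass `batch_size` (not `batch_sampler`) and a custom `collate_fn` that pads each batch. Here `encoded_dataset` is the formatted dataset from the issue, and a padding value of 0 is assumed to match the tokenizer's pad token id:

```python
import torch
from torch.nn.utils.rnn import pad_sequence

def pad_collate(batch):
    # each element of `batch` is a dict of 1-D tensors with differing lengths;
    # pad every column to the longest sequence in the batch
    return {
        key: pad_sequence([example[key] for example in batch], batch_first=True, padding_value=0)
        for key in batch[0]
    }

dataloader = torch.utils.data.DataLoader(encoded_dataset, batch_size=32, collate_fn=pad_collate)
batch = next(iter(dataloader))  # tensors now stack cleanly
```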
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1765/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1765/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1764
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1764/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1764/comments
https://api.github.com/repos/huggingface/datasets/issues/1764/events
https://github.com/huggingface/datasets/issues/1764
791,486,860
MDU6SXNzdWU3OTE0ODY4NjA=
1,764
Connection Issues
{ "avatar_url": "https://avatars.githubusercontent.com/u/12455298?v=4", "events_url": "https://api.github.com/users/SaeedNajafi/events{/privacy}", "followers_url": "https://api.github.com/users/SaeedNajafi/followers", "following_url": "https://api.github.com/users/SaeedNajafi/following{/other_user}", "gists_url": "https://api.github.com/users/SaeedNajafi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SaeedNajafi", "id": 12455298, "login": "SaeedNajafi", "node_id": "MDQ6VXNlcjEyNDU1Mjk4", "organizations_url": "https://api.github.com/users/SaeedNajafi/orgs", "received_events_url": "https://api.github.com/users/SaeedNajafi/received_events", "repos_url": "https://api.github.com/users/SaeedNajafi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SaeedNajafi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaeedNajafi/subscriptions", "type": "User", "url": "https://api.github.com/users/SaeedNajafi" }
[]
closed
false
null
[]
null
[ "Academic WIFI was blocking." ]
"2021-01-21T20:56:09Z"
"2021-01-21T21:00:19Z"
"2021-01-21T21:00:02Z"
NONE
null
Today, I am getting connection issues while loading a dataset and the metric. ``` Traceback (most recent call last): File "src/train.py", line 180, in <module> train_dataset, dev_dataset, test_dataset = create_race_dataset() File "src/train.py", line 130, in create_race_dataset train_dataset = load_dataset("race", "all", split="train") File "/Users/saeed/Desktop/codes/repos/dreamscape-qa/env/lib/python3.7/site-packages/datasets/load.py", line 591, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/Users/saeed/Desktop/codes/repos/dreamscape-qa/env/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/Users/saeed/Desktop/codes/repos/dreamscape-qa/env/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 343, in cached_path max_retries=download_config.max_retries, File "/Users/saeed/Desktop/codes/repos/dreamscape-qa/env/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 617, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.1/datasets/race/race.py ``` Or ``` Traceback (most recent call last): File "src/train.py", line 105, in <module> rouge = datasets.load_metric("rouge") File "/Users/saeed/Desktop/codes/repos/dreamscape-qa/env/lib/python3.7/site-packages/datasets/load.py", line 500, in load_metric dataset=False, File "/Users/saeed/Desktop/codes/repos/dreamscape-qa/env/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/Users/saeed/Desktop/codes/repos/dreamscape-qa/env/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 343, in cached_path max_retries=download_config.max_retries, File "/Users/saeed/Desktop/codes/repos/dreamscape-qa/env/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 617, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.1/metrics/rouge/rouge.py ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1764/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1764/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1763
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1763/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1763/comments
https://api.github.com/repos/huggingface/datasets/issues/1763/events
https://github.com/huggingface/datasets/pull/1763
791,389,763
MDExOlB1bGxSZXF1ZXN0NTU5NDU3MTY1
1,763
PAWS-X: Fix csv Dictreader splitting data on quotes
{ "avatar_url": "https://avatars.githubusercontent.com/u/9641196?v=4", "events_url": "https://api.github.com/users/gowtham1997/events{/privacy}", "followers_url": "https://api.github.com/users/gowtham1997/followers", "following_url": "https://api.github.com/users/gowtham1997/following{/other_user}", "gists_url": "https://api.github.com/users/gowtham1997/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gowtham1997", "id": 9641196, "login": "gowtham1997", "node_id": "MDQ6VXNlcjk2NDExOTY=", "organizations_url": "https://api.github.com/users/gowtham1997/orgs", "received_events_url": "https://api.github.com/users/gowtham1997/received_events", "repos_url": "https://api.github.com/users/gowtham1997/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gowtham1997/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gowtham1997/subscriptions", "type": "User", "url": "https://api.github.com/users/gowtham1997" }
[]
closed
false
null
[]
null
[]
"2021-01-21T18:21:01Z"
"2021-01-22T10:14:33Z"
"2021-01-22T10:13:45Z"
CONTRIBUTOR
null
```python from datasets import load_dataset # load english paws-x dataset datasets = load_dataset('paws-x', 'en') print(len(datasets['train'])) # outputs 49202 but official dataset has 49401 pairs print(datasets['train'].unique('label')) # outputs [1, 0, -1] but labels are binary [0,1] ``` Changed `data = csv.DictReader(f, delimiter="\t")` to `data = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)` in the data loader so that the csv module does not split fields on quotes. The results are as expected for all languages after the change.
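A small, self-contained illustration of why `quoting=csv.QUOTE_NONE` matters here (the row below is hypothetical; PAWS-X sentences contain literal quote characters, which default quote-aware parsing swallows together with the tab delimiters):

```python
import csv
import io

row = 'id\t"He said\tsomething"\tlabel\n'

# default quoting treats the second field as quoted, so the embedded tab
# is kept inside one field and two columns get merged
print(next(csv.reader(io.StringIO(row), delimiter="\t")))
# ['id', 'He said\tsomething', 'label']

# QUOTE_NONE keeps quotes as literal characters, so tabs always delimit fields
print(next(csv.reader(io.StringIO(row), delimiter="\t", quoting=csv.QUOTE_NONE)))
# ['id', '"He said', 'something"', 'label']
```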
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1763/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1763/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1763.diff", "html_url": "https://github.com/huggingface/datasets/pull/1763", "merged_at": "2021-01-22T10:13:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/1763.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1763" }
true
https://api.github.com/repos/huggingface/datasets/issues/1762
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1762/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1762/comments
https://api.github.com/repos/huggingface/datasets/issues/1762/events
https://github.com/huggingface/datasets/issues/1762
791,226,007
MDU6SXNzdWU3OTEyMjYwMDc=
1,762
Unable to format dataset to CUDA Tensors
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
[]
closed
false
null
[]
null
[ "Hi ! You can get CUDA tensors with\r\n\r\n```python\r\ndataset.set_format(\"torch\", columns=columns, device=\"cuda\")\r\n```\r\n\r\nIndeed `set_format` passes the `**kwargs` to `torch.tensor`", "Hi @lhoestq,\r\n\r\nThanks a lot. Is this true for all format types?\r\n\r\nAs in, for 'torch', I can have `**kwargs` to `torch.tensor` and for 'tf' those args are passed to `tf.Tensor`, and the same for 'numpy' and 'pandas'?", "Yes the keywords arguments are passed to the convert function like `np.array`, `torch.tensor` or `tensorflow.ragged.constant`.\r\nWe don't support the kwargs for pandas on the other hand.", "Thanks @lhoestq,\r\nWould it be okay if I added this to the docs and made a PR?", "Sure ! Feel free to open a PR to improve the documentation :) ", "Closing this issue as it has been resolved." ]
"2021-01-21T15:31:23Z"
"2021-02-02T07:13:22Z"
"2021-02-02T07:13:22Z"
CONTRIBUTOR
null
Hi, I came across this [link](https://huggingface.co/docs/datasets/torch_tensorflow.html) where the docs show how to convert a dataset to a particular format. I see that there is an option to convert it to tensors, but I don't see any option to convert it to CUDA tensors. I tried this, but Dataset doesn't support assignment: ``` columns=['input_ids', 'token_type_ids', 'attention_mask', 'start_positions','end_positions'] samples.set_format(type='torch', columns = columns) for column in columns: samples[column].to(torch.device(self.config.device)) ``` There should be an option to do so, or if there is already a way to do this, please let me know. Thanks, Gunjan
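A minimal sketch of the approach from the comments above: `set_format` forwards extra keyword arguments to `torch.tensor`, so the device can be set there directly (`samples` is the dataset from the issue):

```python
import torch

columns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions']
device = "cuda" if torch.cuda.is_available() else "cpu"

# format kwargs such as `device` are passed through to torch.tensor(...)
samples.set_format(type="torch", columns=columns, device=device)
print(samples[0]["input_ids"].device)  # e.g. cuda:0
```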
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1762/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1762/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1761
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1761/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1761/comments
https://api.github.com/repos/huggingface/datasets/issues/1761/events
https://github.com/huggingface/datasets/pull/1761
791,150,858
MDExOlB1bGxSZXF1ZXN0NTU5MjUyMzEw
1,761
Add SILICONE benchmark
{ "avatar_url": "https://avatars.githubusercontent.com/u/1551356?v=4", "events_url": "https://api.github.com/users/eusip/events{/privacy}", "followers_url": "https://api.github.com/users/eusip/followers", "following_url": "https://api.github.com/users/eusip/following{/other_user}", "gists_url": "https://api.github.com/users/eusip/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eusip", "id": 1551356, "login": "eusip", "node_id": "MDQ6VXNlcjE1NTEzNTY=", "organizations_url": "https://api.github.com/users/eusip/orgs", "received_events_url": "https://api.github.com/users/eusip/received_events", "repos_url": "https://api.github.com/users/eusip/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eusip/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eusip/subscriptions", "type": "User", "url": "https://api.github.com/users/eusip" }
[]
closed
false
null
[]
null
[ "Thanks for the feedback. All your comments have been addressed!", "Thank you for your constructive feedback! I now know how to best format future datasets that our team plans to publish in the near future :)", "Awesome ! Looking forward to it :) ", "Hi @lhoestq ! One last question. Our research team would like to distribute a link to this dataset amongst the spoken dialogue research community but the dataset does not show in the dropdown menu at huggingface.co. Is there anything else we must do in order to find the dataset there ?\r\n\r\nOnce the dataset does show in the dropdown menu, how can I affiliate it with the Telecom Paris organization that I already created at the website ?", "The files are not located in the right place in the repo. Let me move them", "I created a PR at https://github.com/huggingface/datasets/pull/1794", "I just merged the change @eusip, now the dataset page is available at the url:\r\nhttps://huggingface.co/datasets/silicone", "Thank you for moving the folder for me :)" ]
"2021-01-21T14:29:12Z"
"2021-02-04T14:32:48Z"
"2021-01-26T13:50:31Z"
CONTRIBUTOR
null
My collaborators and I within the Affective Computing team at Telecom Paris would like to re-submit our spoken dialogue dataset for publication. This is a new pull request relative to the [previously closed request](https://github.com/huggingface/datasets/pull/1712) which was reviewed by @lhoestq.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1761/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1761/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1761.diff", "html_url": "https://github.com/huggingface/datasets/pull/1761", "merged_at": "2021-01-26T13:50:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/1761.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1761" }
true
https://api.github.com/repos/huggingface/datasets/issues/1760
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1760/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1760/comments
https://api.github.com/repos/huggingface/datasets/issues/1760/events
https://github.com/huggingface/datasets/pull/1760
791,110,857
MDExOlB1bGxSZXF1ZXN0NTU5MjE3MjY0
1,760
More tags
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "Conll has `multilingual` but is only tagged as `en`", "good catch, that was a bad copy paste x)" ]
"2021-01-21T13:50:10Z"
"2021-01-22T09:40:01Z"
"2021-01-22T09:40:00Z"
MEMBER
null
Since the hub v2 is going to be released soon, I figured it would be great to add the missing tags, at least for some of the datasets of reference listed [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#write-the-loadingprocessing-code)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1760/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1760/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1760.diff", "html_url": "https://github.com/huggingface/datasets/pull/1760", "merged_at": "2021-01-22T09:40:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/1760.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1760" }
true
https://api.github.com/repos/huggingface/datasets/issues/1759
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1759/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1759/comments
https://api.github.com/repos/huggingface/datasets/issues/1759/events
https://github.com/huggingface/datasets/issues/1759
790,992,226
MDU6SXNzdWU3OTA5OTIyMjY=
1,759
wikipedia dataset incomplete
{ "avatar_url": "https://avatars.githubusercontent.com/u/19912393?v=4", "events_url": "https://api.github.com/users/ChrisDelClea/events{/privacy}", "followers_url": "https://api.github.com/users/ChrisDelClea/followers", "following_url": "https://api.github.com/users/ChrisDelClea/following{/other_user}", "gists_url": "https://api.github.com/users/ChrisDelClea/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ChrisDelClea", "id": 19912393, "login": "ChrisDelClea", "node_id": "MDQ6VXNlcjE5OTEyMzkz", "organizations_url": "https://api.github.com/users/ChrisDelClea/orgs", "received_events_url": "https://api.github.com/users/ChrisDelClea/received_events", "repos_url": "https://api.github.com/users/ChrisDelClea/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ChrisDelClea/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ChrisDelClea/subscriptions", "type": "User", "url": "https://api.github.com/users/ChrisDelClea" }
[]
closed
false
null
[]
null
[ "Hi !\r\nFrom what pickle file fo you get this ?\r\nI guess you mean the dataset loaded using `load_dataset` ?", "yes sorry, I used the `load_dataset`function and saved the data to a pickle file so I don't always have to reload it and are able to work offline. ", "The wikipedia articles are processed using the `mwparserfromhell` library. Even if it works well in most cases, such issues can happen unfortunately. You can find the repo here: https://github.com/earwig/mwparserfromhell\r\n\r\nThere also exist other datasets based on wikipedia that were processed differently (and are often cleaner) such as `wiki40b`.\r\n\r\n", "ok great. Thank you, @lhoestq. " ]
"2021-01-21T11:47:15Z"
"2021-01-21T17:22:11Z"
"2021-01-21T17:21:06Z"
NONE
null
Hey guys, I am using the https://github.com/huggingface/datasets/tree/master/datasets/wikipedia dataset. Unfortunately, I found that the German dataset is incomplete. For reasons unknown to me, the number of inhabitants has been removed from many pages: Thorey-sur-Ouche has 128 inhabitants according to the webpage (https://de.wikipedia.org/wiki/Thorey-sur-Ouche). The pickle file, however, shows: "französische Gemeinde mit Einwohnern (Stand)" ("French commune with inhabitants (as of)"), i.e. the population figure is missing. Is it possible to fix this? Best regards Chris
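A minimal sketch of what is likely happening (the wikitext below is illustrative, not the exact article source): the `wikipedia` loader strips markup with `mwparserfromhell`, and template calls, such as the population templates used on French-commune pages, are dropped entirely, taking the numbers with them:

```python
import mwparserfromhell

# hypothetical wikitext resembling the article's lead sentence; {{EWZ|...}} is a
# population template whose parameters here are made up for illustration
text = "Thorey-sur-Ouche ist eine französische Gemeinde mit {{EWZ|FR|21633}} Einwohnern (Stand {{EWZ-Jahr|FR}})."

print(mwparserfromhell.parse(text).strip_code())
# roughly: "Thorey-sur-Ouche ist eine französische Gemeinde mit  Einwohnern (Stand )."
```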
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1759/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1759/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1758
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1758/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1758/comments
https://api.github.com/repos/huggingface/datasets/issues/1758/events
https://github.com/huggingface/datasets/issues/1758
790,626,116
MDU6SXNzdWU3OTA2MjYxMTY=
1,758
dataset.search() (elastic) cannot reliably retrieve search results
{ "avatar_url": "https://avatars.githubusercontent.com/u/49048309?v=4", "events_url": "https://api.github.com/users/afogarty85/events{/privacy}", "followers_url": "https://api.github.com/users/afogarty85/followers", "following_url": "https://api.github.com/users/afogarty85/following{/other_user}", "gists_url": "https://api.github.com/users/afogarty85/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/afogarty85", "id": 49048309, "login": "afogarty85", "node_id": "MDQ6VXNlcjQ5MDQ4MzA5", "organizations_url": "https://api.github.com/users/afogarty85/orgs", "received_events_url": "https://api.github.com/users/afogarty85/received_events", "repos_url": "https://api.github.com/users/afogarty85/repos", "site_admin": false, "starred_url": "https://api.github.com/users/afogarty85/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/afogarty85/subscriptions", "type": "User", "url": "https://api.github.com/users/afogarty85" }
[]
closed
false
null
[]
null
[ "Hi !\r\nI tried your code on my side and I was able to workaround this issue by waiting a few seconds before querying the index.\r\nMaybe this is because the index is not updated yet on the ElasticSearch side ?", "Thanks for the feedback! I added a 30 second \"sleep\" and that seemed to work well!" ]
"2021-01-21T02:26:37Z"
"2021-01-22T00:25:50Z"
"2021-01-22T00:25:50Z"
NONE
null
I am trying to use elastic search to retrieve the indices of items in the dataset in their precise order, given shuffled training indices. The problem I have is that I cannot retrieve reliable results with my data on my first search. I have to run the search **twice** to get the right answer. I am indexing data that looks like the following from the HF SQuAD 2.0 data set: ``` ['57318658e6313a140071d02b', '56f7165e3d8e2e1400e3733a', '570e2f6e0b85d914000d7d21', '5727e58aff5b5019007d97d0', '5a3b5a503ff257001ab8441f', '57262fab271a42140099d725'] ``` To reproduce the issue, try: ``` from datasets import load_dataset, load_metric from transformers import BertTokenizerFast, BertForQuestionAnswering from elasticsearch import Elasticsearch import numpy as np import collections from tqdm.auto import tqdm import torch # from https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb#scrollTo=941LPhDWeYv- tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased') max_length = 384 # The maximum length of a feature (question and context) doc_stride = 128 # The authorized overlap between two part of the context when splitting it is needed. pad_on_right = tokenizer.padding_side == "right" squad_v2 = True # from https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb#scrollTo=941LPhDWeYv- def prepare_validation_features(examples): # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results # in one example possible giving several features when a context is long, each of those features having a # context that overlaps a bit the context of the previous feature. tokenized_examples = tokenizer( examples["question" if pad_on_right else "context"], examples["context" if pad_on_right else "question"], truncation="only_second" if pad_on_right else "only_first", max_length=max_length, stride=doc_stride, return_overflowing_tokens=True, return_offsets_mapping=True, padding="max_length", ) # Since one example might give us several features if it has a long context, we need a map from a feature to # its corresponding example. This key gives us just that. sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping") # We keep the example_id that gave us this feature and we will store the offset mappings. tokenized_examples["example_id"] = [] for i in range(len(tokenized_examples["input_ids"])): # Grab the sequence corresponding to that example (to know what is the context and what is the question). sequence_ids = tokenized_examples.sequence_ids(i) context_index = 1 if pad_on_right else 0 # One example can give several spans, this is the index of the example containing this span of text. sample_index = sample_mapping[i] tokenized_examples["example_id"].append(examples["id"][sample_index]) # Set to None the offset_mapping that are not part of the context so it's easy to determine if a token # position is part of the context or not. 
tokenized_examples["offset_mapping"][i] = [ (list(o) if sequence_ids[k] == context_index else None) for k, o in enumerate(tokenized_examples["offset_mapping"][i]) ] return tokenized_examples # build base examples, features set of training data shuffled_idx = pd.read_csv('https://raw.githubusercontent.com/afogarty85/temp/main/idx.csv')['idx'].to_list() examples = load_dataset("squad_v2").shuffle(seed=1)['train'] features = load_dataset("squad_v2").shuffle(seed=1)['train'].map( prepare_validation_features, batched=True, remove_columns=['answers', 'context', 'id', 'question', 'title']) # reorder features by the training process features = features.select(indices=shuffled_idx) # get the example ids to match with the "example" data; get unique entries id_list = list(dict.fromkeys(features['example_id'])) # now search for their index positions in the examples data set; load elastic search es = Elasticsearch([{'host': 'localhost'}]).ping() # add an index to the id column for the examples examples.add_elasticsearch_index(column='id') # retrieve the example index example_idx_k1 = [examples.search(index_name='id', query=i, k=1).indices for i in id_list] example_idx_k1 = [item for sublist in example_idx_k1 for item in sublist] example_idx_k2 = [examples.search(index_name='id', query=i, k=3).indices for i in id_list] example_idx_k2 = [item for sublist in example_idx_k2 for item in sublist] len(example_idx_k1) # should be 130319 len(example_idx_k2) # should be 130319 #trial 1 lengths: # k=1: 130314 # k=3: 130319 # trial 2: # just run k=3 first: 130310 # try k=1 after k=3: 130319 ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1758/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1758/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1757
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1757/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1757/comments
https://api.github.com/repos/huggingface/datasets/issues/1757/events
https://github.com/huggingface/datasets/issues/1757
790,466,509
MDU6SXNzdWU3OTA0NjY1MDk=
1,757
FewRel
{ "avatar_url": "https://avatars.githubusercontent.com/u/6183050?v=4", "events_url": "https://api.github.com/users/dspoka/events{/privacy}", "followers_url": "https://api.github.com/users/dspoka/followers", "following_url": "https://api.github.com/users/dspoka/following{/other_user}", "gists_url": "https://api.github.com/users/dspoka/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dspoka", "id": 6183050, "login": "dspoka", "node_id": "MDQ6VXNlcjYxODMwNTA=", "organizations_url": "https://api.github.com/users/dspoka/orgs", "received_events_url": "https://api.github.com/users/dspoka/received_events", "repos_url": "https://api.github.com/users/dspoka/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dspoka/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dspoka/subscriptions", "type": "User", "url": "https://api.github.com/users/dspoka" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[ "+1", "@dspoka Please check the following link : https://github.com/thunlp/FewRel\r\nThis link mentions two versions of the datasets. Also, this one seems to be the official link.\r\n\r\nI am assuming this is the correct link and implementing based on the same.", "Hi @lhoestq,\r\n\r\nThis issue can be closed, I guess.", "Yes :) closing\r\nThanks again for adding FewRel !", "Thanks for adding this @gchhablani ! Sorry didn't see the email notifications sooner!" ]
"2021-01-20T23:56:03Z"
"2021-03-09T02:52:05Z"
"2021-03-08T14:34:52Z"
NONE
null
## Adding a Dataset - **Name:** FewRel - **Description:** Large-Scale Supervised Few-Shot Relation Classification Dataset - **Paper:** @inproceedings{han2018fewrel, title={FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation}, author={Han, Xu and Zhu, Hao and Yu, Pengfei and Wang, Ziyun and Yao, Yuan and Liu, Zhiyuan and Sun, Maosong}, booktitle={EMNLP}, year={2018}} - **Data:** https://github.com/ProKil/FewRel - **Motivation:** A relation extraction dataset that has been used by some state-of-the-art systems and should be incorporated. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1757/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1757/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1756
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1756/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1756/comments
https://api.github.com/repos/huggingface/datasets/issues/1756/events
https://github.com/huggingface/datasets/issues/1756
790,380,028
MDU6SXNzdWU3OTAzODAwMjg=
1,756
Ccaligned multilingual translation dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4", "events_url": "https://api.github.com/users/flozi00/events{/privacy}", "followers_url": "https://api.github.com/users/flozi00/followers", "following_url": "https://api.github.com/users/flozi00/following{/other_user}", "gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/flozi00", "id": 47894090, "login": "flozi00", "node_id": "MDQ6VXNlcjQ3ODk0MDkw", "organizations_url": "https://api.github.com/users/flozi00/orgs", "received_events_url": "https://api.github.com/users/flozi00/received_events", "repos_url": "https://api.github.com/users/flozi00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flozi00/subscriptions", "type": "User", "url": "https://api.github.com/users/flozi00" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
"2021-01-20T22:18:44Z"
"2021-03-01T10:36:21Z"
"2021-03-01T10:36:21Z"
CONTRIBUTOR
null
## Adding a Dataset - **Name:** CCAligned - **Description:** CCAligned consists of parallel or comparable web-document pairs in 137 languages aligned with English. These web-document pairs were constructed by performing language identification on raw web documents and ensuring that corresponding language codes appeared in the URLs of the web documents. This pattern-matching approach yielded more than 100 million aligned documents paired with English. Recognizing that each English document was often aligned to multiple documents in different target languages, we can join on English documents to obtain aligned documents that directly pair two non-English documents (e.g., Arabic-French). - **Paper:** https://www.aclweb.org/anthology/2020.emnlp-main.480.pdf - **Data:** http://www.statmt.org/cc-aligned/ - **Motivation:** - The authors say it is a high-quality dataset. - It is pretty large and includes many language pairs. It could be interesting to train mT5 on it. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1756/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1756/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1755
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1755/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1755/comments
https://api.github.com/repos/huggingface/datasets/issues/1755/events
https://github.com/huggingface/datasets/issues/1755
790,324,734
MDU6SXNzdWU3OTAzMjQ3MzQ=
1,755
Using select/reordering datasets slows operations down immensely
{ "avatar_url": "https://avatars.githubusercontent.com/u/49048309?v=4", "events_url": "https://api.github.com/users/afogarty85/events{/privacy}", "followers_url": "https://api.github.com/users/afogarty85/followers", "following_url": "https://api.github.com/users/afogarty85/following{/other_user}", "gists_url": "https://api.github.com/users/afogarty85/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/afogarty85", "id": 49048309, "login": "afogarty85", "node_id": "MDQ6VXNlcjQ5MDQ4MzA5", "organizations_url": "https://api.github.com/users/afogarty85/orgs", "received_events_url": "https://api.github.com/users/afogarty85/received_events", "repos_url": "https://api.github.com/users/afogarty85/repos", "site_admin": false, "starred_url": "https://api.github.com/users/afogarty85/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/afogarty85/subscriptions", "type": "User", "url": "https://api.github.com/users/afogarty85" }
[]
closed
false
null
[]
null
[ "You can use `Dataset.flatten_indices()` to make it fast after a select or shuffle.", "Thanks for the input! I gave that a try by adding this after my selection / reordering operations, but before the big computation task of `score_squad`\r\n\r\n```\r\nexamples = examples.flatten_indices()\r\nfeatures = features.flatten_indices()\r\n```\r\n\r\nThat helped quite a bit!" ]
"2021-01-20T21:12:12Z"
"2021-01-20T22:03:39Z"
"2021-01-20T22:03:39Z"
NONE
null
I am using portions of HF's helpful work in preparing / scoring the SQuAD 2.0 data. The problem I have is that after using `select` to re-order the dataset, computations slow down immensely: the total scoring process on 131k training examples would take maybe 3 minutes, but now takes over an hour. The example below should be reproducible. I have run myself down this path because I want to use HF's scoring functions and helpful data preparation, but with my own trainer. The training process uses shuffle and therefore the order I trained on no longer matches the original data set order. So, to score my results correctly, the original data set needs to match the order of the training. This requires that I: (1) collect the index for each row of data emitted during training, and (2) use this index information to re-order the datasets correctly so the orders match when I go to score. The problem is, the dataset class starts performing very poorly as soon as you start reordering it on this scale. ``` from datasets import load_dataset, load_metric from transformers import BertTokenizerFast, BertForQuestionAnswering from elasticsearch import Elasticsearch import numpy as np import collections from tqdm.auto import tqdm import torch # from https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb#scrollTo=941LPhDWeYv- tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased') max_length = 384 # The maximum length of a feature (question and context) doc_stride = 128 # The authorized overlap between two part of the context when splitting it is needed. pad_on_right = tokenizer.padding_side == "right" squad_v2 = True # from https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb#scrollTo=941LPhDWeYv- def prepare_validation_features(examples): # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results # in one example possible giving several features when a context is long, each of those features having a # context that overlaps a bit the context of the previous feature. tokenized_examples = tokenizer( examples["question" if pad_on_right else "context"], examples["context" if pad_on_right else "question"], truncation="only_second" if pad_on_right else "only_first", max_length=max_length, stride=doc_stride, return_overflowing_tokens=True, return_offsets_mapping=True, padding="max_length", ) # Since one example might give us several features if it has a long context, we need a map from a feature to # its corresponding example. This key gives us just that. sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping") # We keep the example_id that gave us this feature and we will store the offset mappings. tokenized_examples["example_id"] = [] for i in range(len(tokenized_examples["input_ids"])): # Grab the sequence corresponding to that example (to know what is the context and what is the question). sequence_ids = tokenized_examples.sequence_ids(i) context_index = 1 if pad_on_right else 0 # One example can give several spans, this is the index of the example containing this span of text. sample_index = sample_mapping[i] tokenized_examples["example_id"].append(examples["id"][sample_index]) # Set to None the offset_mapping that are not part of the context so it's easy to determine if a token # position is part of the context or not. 
tokenized_examples["offset_mapping"][i] = [ (list(o) if sequence_ids[k] == context_index else None) for k, o in enumerate(tokenized_examples["offset_mapping"][i]) ] return tokenized_examples # from https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb#scrollTo=941LPhDWeYv- def postprocess_qa_predictions(examples, features, starting_logits, ending_logits, n_best_size = 20, max_answer_length = 30): all_start_logits, all_end_logits = starting_logits, ending_logits # Build a map example to its corresponding features. example_id_to_index = {k: i for i, k in enumerate(examples["id"])} features_per_example = collections.defaultdict(list) for i, feature in enumerate(features): features_per_example[example_id_to_index[feature["example_id"]]].append(i) # The dictionaries we have to fill. predictions = collections.OrderedDict() # Logging. print(f"Post-processing {len(examples)} example predictions split into {len(features)} features.") # Let's loop over all the examples! for example_index, example in enumerate(tqdm(examples)): # Those are the indices of the features associated to the current example. feature_indices = features_per_example[example_index] min_null_score = None # Only used if squad_v2 is True. valid_answers = [] context = example["context"] # Looping through all the features associated to the current example. for feature_index in feature_indices: # We grab the predictions of the model for this feature. start_logits = all_start_logits[feature_index] end_logits = all_end_logits[feature_index] # This is what will allow us to map some the positions in our logits to span of texts in the original # context. offset_mapping = features[feature_index]["offset_mapping"] # Update minimum null prediction. cls_index = features[feature_index]["input_ids"].index(tokenizer.cls_token_id) feature_null_score = start_logits[cls_index] + end_logits[cls_index] if min_null_score is None or min_null_score < feature_null_score: min_null_score = feature_null_score # Go through all possibilities for the `n_best_size` greater start and end logits. start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist() end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist() for start_index in start_indexes: for end_index in end_indexes: # Don't consider out-of-scope answers, either because the indices are out of bounds or correspond # to part of the input_ids that are not in the context. if ( start_index >= len(offset_mapping) or end_index >= len(offset_mapping) or offset_mapping[start_index] is None or offset_mapping[end_index] is None ): continue # Don't consider answers with a length that is either < 0 or > max_answer_length. if end_index < start_index or end_index - start_index + 1 > max_answer_length: continue start_char = offset_mapping[start_index][0] end_char = offset_mapping[end_index][1] valid_answers.append( { "score": start_logits[start_index] + end_logits[end_index], "text": context[start_char: end_char] } ) if len(valid_answers) > 0: best_answer = sorted(valid_answers, key=lambda x: x["score"], reverse=True)[0] else: # In the very rare edge case we have not a single non-null prediction, we create a fake prediction to avoid # failure. 
best_answer = {"text": "", "score": 0.0} # Let's pick our final answer: the best one or the null answer (only for squad_v2) if not squad_v2: predictions[example["id"]] = best_answer["text"] else: answer = best_answer["text"] if best_answer["score"] > min_null_score else "" predictions[example["id"]] = answer return predictions # build base examples, features from training data examples = load_dataset("squad_v2").shuffle(seed=5)['train'] features = load_dataset("squad_v2").shuffle(seed=5)['train'].map( prepare_validation_features, batched=True, remove_columns=['answers', 'context', 'id', 'question', 'title']) # sim some shuffled training indices that we want to use to re-order the data to compare how we did shuffle_idx = np.arange(0, 131754) np.random.shuffle(shuffle_idx) # create a new dataset with rows selected following the training shuffle features = features.select(indices=shuffle_idx) # get unique example ids to match with the "example" data id_list = list(dict.fromkeys(features['example_id'])) # now search for their index positions; load elastic search es = Elasticsearch([{'host': 'localhost'}]).ping() # add an index to the id column for the examples examples.add_elasticsearch_index(column='id') # search the examples for their index position example_idx = [examples.search(index_name='id', query=i, k=1).indices for i in id_list] # drop the elastic search examples.drop_index(index_name='id') # put examples in the right order examples = examples.select(indices=example_idx) # generate some fake data logits = {'starting_logits': torch.randn(131754, 384), 'ending_logits': torch.randn(131754, 384)} def score_squad(logits, n_best_size, max_answer): # proceed with QA calculation final_predictions = postprocess_qa_predictions(examples=examples, features=features, starting_logits=logits['starting_logits'], ending_logits=logits['ending_logits'], n_best_size=20, max_answer_length=30) metric = load_metric("squad_v2") formatted_predictions = [{"id": k, "prediction_text": v, "no_answer_probability": 0.0} for k, v in final_predictions.items()] references = [{"id": ex["id"], "answers": ex["answers"]} for ex in examples] metrics = metric.compute(predictions=formatted_predictions, references=references) return metrics metrics = score_squad(logits, n_best_size=20, max_answer=30) ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1755/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1755/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1754
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1754/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1754/comments
https://api.github.com/repos/huggingface/datasets/issues/1754/events
https://github.com/huggingface/datasets/pull/1754
789,881,730
MDExOlB1bGxSZXF1ZXN0NTU4MTU5NjEw
1,754
Use a config id in the cache directory names for custom configs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2021-01-20T11:11:00Z"
"2021-01-25T09:12:07Z"
"2021-01-25T09:12:06Z"
MEMBER
null
As noticed by @JetRunner there were some issues when trying to generate a dataset using a custom config that is based on an existing config. For example, in the following code the `mnli_custom` would reuse the cache used to create `mnli` instead of generating a new dataset with the new label classes: ```python from datasets import load_dataset mnli = load_dataset("glue", "mnli") mnli_custom = load_dataset("glue", "mnli", label_classes=["contradiction", "entailment", "neutral"]) ``` I fixed that by extending the cache directory definition of a dataset that is being generated. Instead of using the config name in the cache directory name, I switched to using a `config_id`. By default it is equal to the config name. However the name of a config is not sufficient to have a unique identifier for the dataset being generated since it doesn't take into account: - the config kwargs that can be used to overwrite attributes - the custom features used to write the dataset - the data_files for json/text/csv/pandas datasets Therefore the config id is just the config name with an optional suffix based on these. In particular, taking into account the config kwargs fixes the issue with the `label_classes` above. I completed the current test cases by adding the case that was missing: overwriting an already existing config.
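A rough sketch of the idea (the helper below is hypothetical; the real logic lives in the `datasets` builder code and also folds in custom features and data files):

```python
import json
from hashlib import sha256

def make_config_id(config_name: str, config_kwargs: dict) -> str:
    # by default the config id is just the config name
    if not config_kwargs:
        return config_name
    # otherwise append a deterministic suffix derived from the overriding kwargs
    suffix = sha256(json.dumps(config_kwargs, sort_keys=True).encode("utf-8")).hexdigest()[:16]
    return f"{config_name}-{suffix}"

# e.g. a custom `label_classes` kwarg now yields a cache dir like "mnli-3fc0a1..."
# instead of silently reusing the plain "mnli" cache
```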
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1754/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1754/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1754.diff", "html_url": "https://github.com/huggingface/datasets/pull/1754", "merged_at": "2021-01-25T09:12:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/1754.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1754" }
true
https://api.github.com/repos/huggingface/datasets/issues/1753
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1753/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1753/comments
https://api.github.com/repos/huggingface/datasets/issues/1753/events
https://github.com/huggingface/datasets/pull/1753
789,867,685
MDExOlB1bGxSZXF1ZXN0NTU4MTQ3Njkx
1,753
fix comet citations
{ "avatar_url": "https://avatars.githubusercontent.com/u/17256847?v=4", "events_url": "https://api.github.com/users/ricardorei/events{/privacy}", "followers_url": "https://api.github.com/users/ricardorei/followers", "following_url": "https://api.github.com/users/ricardorei/following{/other_user}", "gists_url": "https://api.github.com/users/ricardorei/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ricardorei", "id": 17256847, "login": "ricardorei", "node_id": "MDQ6VXNlcjE3MjU2ODQ3", "organizations_url": "https://api.github.com/users/ricardorei/orgs", "received_events_url": "https://api.github.com/users/ricardorei/received_events", "repos_url": "https://api.github.com/users/ricardorei/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ricardorei/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ricardorei/subscriptions", "type": "User", "url": "https://api.github.com/users/ricardorei" }
[]
closed
false
null
[]
null
[]
"2021-01-20T10:52:38Z"
"2021-01-20T14:39:30Z"
"2021-01-20T14:39:30Z"
CONTRIBUTOR
null
I realized the COMET citations were not showing on the Hugging Face metrics page: <img width="814" alt="Screenshot 2021-01-20 at 09 48 44" src="https://user-images.githubusercontent.com/17256847/105164848-8b9da900-5b0d-11eb-9e20-a38f559d2037.png"> This pull request is intended to fix that. Thanks!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1753/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1753/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1753.diff", "html_url": "https://github.com/huggingface/datasets/pull/1753", "merged_at": "2021-01-20T14:39:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/1753.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1753" }
true
https://api.github.com/repos/huggingface/datasets/issues/1752
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1752/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1752/comments
https://api.github.com/repos/huggingface/datasets/issues/1752/events
https://github.com/huggingface/datasets/pull/1752
789,822,459
MDExOlB1bGxSZXF1ZXN0NTU4MTA5NTA5
1,752
COMET metric citation
{ "avatar_url": "https://avatars.githubusercontent.com/u/17256847?v=4", "events_url": "https://api.github.com/users/ricardorei/events{/privacy}", "followers_url": "https://api.github.com/users/ricardorei/followers", "following_url": "https://api.github.com/users/ricardorei/following{/other_user}", "gists_url": "https://api.github.com/users/ricardorei/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ricardorei", "id": 17256847, "login": "ricardorei", "node_id": "MDQ6VXNlcjE3MjU2ODQ3", "organizations_url": "https://api.github.com/users/ricardorei/orgs", "received_events_url": "https://api.github.com/users/ricardorei/received_events", "repos_url": "https://api.github.com/users/ricardorei/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ricardorei/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ricardorei/subscriptions", "type": "User", "url": "https://api.github.com/users/ricardorei" }
[]
closed
false
null
[]
null
[ "I think its better to create a new branch with this fix. I forgot I was still using the old branch." ]
"2021-01-20T09:54:43Z"
"2021-01-20T10:27:07Z"
"2021-01-20T10:25:02Z"
CONTRIBUTOR
null
In my last pull request to add the COMET metric, the citations were not following the usual "format". Because of that they were not correctly displayed on the website: <img width="814" alt="Screenshot 2021-01-20 at 09 48 44" src="https://user-images.githubusercontent.com/17256847/105158000-686efb80-5b05-11eb-8bb0-9c85fdac2938.png"> This pull request is only intended to fix that.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1752/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1752/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1752.diff", "html_url": "https://github.com/huggingface/datasets/pull/1752", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1752.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1752" }
true
https://api.github.com/repos/huggingface/datasets/issues/1751
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1751/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1751/comments
https://api.github.com/repos/huggingface/datasets/issues/1751/events
https://github.com/huggingface/datasets/pull/1751
789,232,980
MDExOlB1bGxSZXF1ZXN0NTU3NjA1ODE2
1,751
Updated README for the Social Bias Frames dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4", "events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}", "followers_url": "https://api.github.com/users/mcmillanmajora/followers", "following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}", "gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mcmillanmajora", "id": 26722925, "login": "mcmillanmajora", "node_id": "MDQ6VXNlcjI2NzIyOTI1", "organizations_url": "https://api.github.com/users/mcmillanmajora/orgs", "received_events_url": "https://api.github.com/users/mcmillanmajora/received_events", "repos_url": "https://api.github.com/users/mcmillanmajora/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions", "type": "User", "url": "https://api.github.com/users/mcmillanmajora" }
[]
closed
false
null
[]
null
[]
"2021-01-19T17:53:00Z"
"2021-01-20T14:56:52Z"
"2021-01-20T14:56:52Z"
CONTRIBUTOR
null
See the updated card at https://github.com/mcmillanmajora/datasets/tree/add-SBIC-card/datasets/social_bias_frames. I incorporated information from the [SBIC data statement](https://homes.cs.washington.edu/~msap/social-bias-frames/DATASTATEMENT.html), paper, and the corpus README file included with the dataset download.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1751/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1751/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1751.diff", "html_url": "https://github.com/huggingface/datasets/pull/1751", "merged_at": "2021-01-20T14:56:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/1751.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1751" }
true
https://api.github.com/repos/huggingface/datasets/issues/1750
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1750/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1750/comments
https://api.github.com/repos/huggingface/datasets/issues/1750/events
https://github.com/huggingface/datasets/pull/1750
788,668,085
MDExOlB1bGxSZXF1ZXN0NTU3MTM1MzM1
1,750
Fix typo in README.md of cnn_dailymail
{ "avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4", "events_url": "https://api.github.com/users/forest1988/events{/privacy}", "followers_url": "https://api.github.com/users/forest1988/followers", "following_url": "https://api.github.com/users/forest1988/following{/other_user}", "gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/forest1988", "id": 2755894, "login": "forest1988", "node_id": "MDQ6VXNlcjI3NTU4OTQ=", "organizations_url": "https://api.github.com/users/forest1988/orgs", "received_events_url": "https://api.github.com/users/forest1988/received_events", "repos_url": "https://api.github.com/users/forest1988/repos", "site_admin": false, "starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/forest1988/subscriptions", "type": "User", "url": "https://api.github.com/users/forest1988" }
[]
closed
false
null
[]
null
[ "Good catch, thanks!", "Thank you for merging!" ]
"2021-01-19T03:06:05Z"
"2021-01-19T11:07:29Z"
"2021-01-19T09:48:43Z"
CONTRIBUTOR
null
When I read the README.md of `CNN/DailyMail Dataset`, there seems to be a typo `CCN`. I am afraid this is a trivial matter, but I would like to make a suggestion for revision.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1750/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1750/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1750.diff", "html_url": "https://github.com/huggingface/datasets/pull/1750", "merged_at": "2021-01-19T09:48:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/1750.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1750" }
true
https://api.github.com/repos/huggingface/datasets/issues/1749
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1749/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1749/comments
https://api.github.com/repos/huggingface/datasets/issues/1749/events
https://github.com/huggingface/datasets/pull/1749
788,476,639
MDExOlB1bGxSZXF1ZXN0NTU2OTgxMDc5
1,749
Added metadata and correct splits for swda.
{ "avatar_url": "https://avatars.githubusercontent.com/u/22454783?v=4", "events_url": "https://api.github.com/users/gmihaila/events{/privacy}", "followers_url": "https://api.github.com/users/gmihaila/followers", "following_url": "https://api.github.com/users/gmihaila/following{/other_user}", "gists_url": "https://api.github.com/users/gmihaila/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gmihaila", "id": 22454783, "login": "gmihaila", "node_id": "MDQ6VXNlcjIyNDU0Nzgz", "organizations_url": "https://api.github.com/users/gmihaila/orgs", "received_events_url": "https://api.github.com/users/gmihaila/received_events", "repos_url": "https://api.github.com/users/gmihaila/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gmihaila/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gmihaila/subscriptions", "type": "User", "url": "https://api.github.com/users/gmihaila" }
[]
closed
false
null
[]
null
[ "I will push updates tomorrow.", "@lhoestq thank you for your comments! I went ahead and fixed the code 😃. Please let me know if I missed anything." ]
"2021-01-18T18:36:32Z"
"2021-01-29T19:35:52Z"
"2021-01-29T18:38:08Z"
CONTRIBUTOR
null
Switchboard Dialog Act Corpus

I made some changes following @bhavitvyamalik's recommendation in #1678:
* Contains all metadata.
* Uses the official implementation from the [/swda](https://github.com/cgpotts/swda) repo.
* Adds the official train and test splits used in [Stolcke et al. (2000)](https://web.stanford.edu/~jurafsky/ws97) and the validation split used in [Probabilistic-RNN-DA-Classifier](https://github.com/NathanDuran/Probabilistic-RNN-DA-Classifier).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 2, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/1749/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1749/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1749.diff", "html_url": "https://github.com/huggingface/datasets/pull/1749", "merged_at": "2021-01-29T18:38:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/1749.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1749" }
true
https://api.github.com/repos/huggingface/datasets/issues/1748
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1748/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1748/comments
https://api.github.com/repos/huggingface/datasets/issues/1748/events
https://github.com/huggingface/datasets/pull/1748
788,431,642
MDExOlB1bGxSZXF1ZXN0NTU2OTQ0NDEx
1,748
add Structured Argument Extraction for Korean dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" }
[]
closed
false
null
[]
null
[]
"2021-01-18T17:14:19Z"
"2021-09-17T16:53:18Z"
"2021-01-19T11:26:58Z"
MEMBER
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1748/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1748/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1748.diff", "html_url": "https://github.com/huggingface/datasets/pull/1748", "merged_at": "2021-01-19T11:26:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/1748.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1748" }
true
https://api.github.com/repos/huggingface/datasets/issues/1747
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1747/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1747/comments
https://api.github.com/repos/huggingface/datasets/issues/1747/events
https://github.com/huggingface/datasets/issues/1747
788,299,775
MDU6SXNzdWU3ODgyOTk3NzU=
1,747
datasets slicing with seed
{ "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghost", "id": 10137, "login": "ghost", "node_id": "MDQ6VXNlcjEwMTM3", "organizations_url": "https://api.github.com/users/ghost/orgs", "received_events_url": "https://api.github.com/users/ghost/received_events", "repos_url": "https://api.github.com/users/ghost/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "type": "User", "url": "https://api.github.com/users/ghost" }
[]
closed
false
null
[]
null
[ "Hi :) \r\nThe slicing API from https://huggingface.co/docs/datasets/splits.html doesn't shuffle the data.\r\nYou can shuffle and then take a subset of your dataset with\r\n```python\r\n# shuffle and take the first 100 examples\r\ndataset = dataset.shuffle(seed=42).select(range(100))\r\n```\r\n\r\nYou can find more information about shuffling and selecting rows in the documentation: https://huggingface.co/docs/datasets/processing.html#selecting-sorting-shuffling-splitting-rows", "thank you so much\n\nOn Mon, Jan 18, 2021 at 3:17 PM Quentin Lhoest <notifications@github.com>\nwrote:\n\n> Hi :)\n> The slicing API doesn't shuffle the data.\n> You can shuffle and then take a subset of your dataset with\n>\n> # shuffle and take the first 100 examplesdataset = dataset.shuffle(seed=42).select(range(100))\n>\n> You can find more information about shuffling and selecting rows in the\n> documentation:\n> https://huggingface.co/docs/datasets/processing.html#selecting-sorting-shuffling-splitting-rows\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/1747#issuecomment-762278134>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AM3GZM5D5MDPLJGI4IG3UADS2Q7GPANCNFSM4WHLOZJQ>\n> .\n>\n" ]
"2021-01-18T14:08:55Z"
"2022-10-05T12:37:27Z"
"2022-10-05T12:37:27Z"
NONE
null
Hi, I need to slice a dataset with a random seed. I looked into the documentation here: https://huggingface.co/docs/datasets/splits.html but could not find a seed option. Could you assist me with how I can get a slice for different seeds? Thank you. @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1747/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1747/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1746
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1746/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1746/comments
https://api.github.com/repos/huggingface/datasets/issues/1746/events
https://github.com/huggingface/datasets/pull/1746
788,188,184
MDExOlB1bGxSZXF1ZXN0NTU2NzQxMjIw
1,746
Fix release conda workflow
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2021-01-18T11:29:10Z"
"2021-01-18T11:31:24Z"
"2021-01-18T11:31:23Z"
MEMBER
null
The current workflow yaml file is not valid according to https://github.com/huggingface/datasets/actions/runs/487638110
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1746/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1746/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1746.diff", "html_url": "https://github.com/huggingface/datasets/pull/1746", "merged_at": "2021-01-18T11:31:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/1746.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1746" }
true
https://api.github.com/repos/huggingface/datasets/issues/1745
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1745/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1745/comments
https://api.github.com/repos/huggingface/datasets/issues/1745/events
https://github.com/huggingface/datasets/issues/1745
787,838,256
MDU6SXNzdWU3ODc4MzgyNTY=
1,745
difference between wsc and wsc.fixed for superglue
{ "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghost", "id": 10137, "login": "ghost", "node_id": "MDQ6VXNlcjEwMTM3", "organizations_url": "https://api.github.com/users/ghost/orgs", "received_events_url": "https://api.github.com/users/ghost/received_events", "repos_url": "https://api.github.com/users/ghost/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "type": "User", "url": "https://api.github.com/users/ghost" }
[]
closed
false
null
[]
null
[ "From the description given in the dataset script for `wsc.fixed`:\r\n```\r\nThis version fixes issues where the spans are not actually substrings of the text.\r\n```" ]
"2021-01-18T00:50:19Z"
"2021-01-18T11:02:43Z"
"2021-01-18T00:59:34Z"
NONE
null
Hi, I see two versions of wsc in superglue and I am not sure what the differences are or which one is the original. Could you help clarify the differences? Thanks. @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1745/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1745/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1744
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1744/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1744/comments
https://api.github.com/repos/huggingface/datasets/issues/1744/events
https://github.com/huggingface/datasets/pull/1744
787,649,811
MDExOlB1bGxSZXF1ZXN0NTU2MzA0MjU4
1,744
Add missing "brief" entries to reuters
{ "avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4", "events_url": "https://api.github.com/users/jbragg/events{/privacy}", "followers_url": "https://api.github.com/users/jbragg/followers", "following_url": "https://api.github.com/users/jbragg/following{/other_user}", "gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jbragg", "id": 2238344, "login": "jbragg", "node_id": "MDQ6VXNlcjIyMzgzNDQ=", "organizations_url": "https://api.github.com/users/jbragg/orgs", "received_events_url": "https://api.github.com/users/jbragg/received_events", "repos_url": "https://api.github.com/users/jbragg/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jbragg/subscriptions", "type": "User", "url": "https://api.github.com/users/jbragg" }
[]
closed
false
null
[]
null
[ "@lhoestq I ran `make style` but CI code quality still failing and I don't have access to logs", "It's also likely that due to the previous placement of the field initialization, much of the data about topics etc was simply wrong and carried over from previous entries. Model scores seem to improve significantly with this PR." ]
"2021-01-17T07:58:49Z"
"2021-01-18T11:26:09Z"
"2021-01-18T11:26:09Z"
CONTRIBUTOR
null
This brings the number of examples for ModApte to match the stated `Training set (9,603 docs)...Test Set (3,299 docs)`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1744/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1744/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1744.diff", "html_url": "https://github.com/huggingface/datasets/pull/1744", "merged_at": "2021-01-18T11:26:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/1744.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1744" }
true
https://api.github.com/repos/huggingface/datasets/issues/1743
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1743/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1743/comments
https://api.github.com/repos/huggingface/datasets/issues/1743/events
https://github.com/huggingface/datasets/issues/1743
787,631,412
MDU6SXNzdWU3ODc2MzE0MTI=
1,743
Issue while Creating Custom Metric
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
[]
closed
false
null
[]
null
[ "Currently it's only possible to define the features for the two columns `references` and `predictions`.\r\nThe data for these columns can then be passed to `metric.add_batch` and `metric.compute`.\r\nInstead of defining more columns `text`, `offset_mapping` and `ground` you must include them in either references and predictions.\r\n\r\nFor example \r\n```python\r\nfeatures = datasets.Features({\r\n 'predictions':datasets.Sequence(datasets.Value(\"int32\")),\r\n \"references\": datasets.Sequence({\r\n \"references_ids\": datasets.Value(\"int32\"),\r\n \"offset_mapping\": datasets.Value(\"int32\"),\r\n 'text': datasets.Value('string'),\r\n \"ground\": datasets.Value(\"int32\")\r\n }),\r\n})\r\n```\r\n\r\nAnother option would be to simply have the two features like \r\n```python\r\nfeatures = datasets.Features({\r\n 'predictions':datasets.Sequence(datasets.Value(\"int32\")),\r\n \"references\": datasets.Sequence(datasets.Value(\"int32\")),\r\n})\r\n```\r\nand keep `offset_mapping`, `text` and `ground` as as parameters for the computation (i.e. kwargs when calling `metric.compute`).\r\n\r\n\r\nWhat is the metric you would like to implement ?\r\n\r\nI'm asking since we consider allowing additional fields as requested in the `Comet` metric (see PR and discussion [here](https://github.com/huggingface/datasets/pull/1577)) and I'd like to know if it's something that can be interesting for users.\r\n\r\nWhat do you think ?", "Hi @lhoestq,\r\n\r\nI am doing text segmentation and the metric is effectively dice score on character offsets. So I need to pass the actual spans and I want to be able to get the spans based on predictions using offset_mapping.\r\n\r\nIncluding them in references seems like a good idea. I'll try it out and get back to you. If there's a better way to write a metric function for the same, please let me know.", "Resolved via https://github.com/huggingface/datasets/pull/3824." ]
"2021-01-17T07:01:14Z"
"2022-06-01T15:49:34Z"
"2022-06-01T15:49:34Z"
CONTRIBUTOR
null
Hi Team,

I am trying to create a custom metric for my training as follows, where f1 is my own metric:

```python
def _info(self):
    # TODO: Specify the datasets.MetricInfo object
    return datasets.MetricInfo(
        # This is the description that will appear on the metrics page.
        description=_DESCRIPTION,
        citation=_CITATION,
        inputs_description=_KWARGS_DESCRIPTION,
        # This defines the format of each prediction and reference
        features=datasets.Features({
            'predictions': datasets.Sequence(datasets.Value("int32")),
            "references": datasets.Sequence(datasets.Value("int32")),
            "offset_mapping": datasets.Sequence(datasets.Value("int32")),
            'text': datasets.Sequence(datasets.Value('string')),
            "ground": datasets.Sequence(datasets.Value("int32")),
        }),
        # Homepage of the metric for documentation
        homepage="http://metric.homepage",
        # Additional links to the codebase or references
        codebase_urls=["http://github.com/path/to/codebase/of/new_metric"],
        reference_urls=["http://path.to.reference.url/new_metric"]
    )

def _compute(self, predictions, references, text, offset_mapping, ground):
    pred_spans = []
    for i, preds in enumerate(predictions):
        current_preds = []
        for j, token_preds in enumerate(preds):
            if token_preds > 0.5:
                current_preds += list(range(offset_mapping[i][j][0], offset_mapping[i][j][1]))
        pred_spans.append(current_preds)
    return {
        "Token Wise F1": f1_score(references, predictions, labels=[0, 1]),
        "Offset Wise F1": np.mean([f1(preds, gold) for preds, gold in zip(pred_spans, ground)])
    }
```

I believe this is not correct. But that's not the issue I am facing right now. I get this error:

```python
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-144-ed7349b50821> in <module>()
----> 1 new_metric.compute(predictions=inputs["labels"], references=inputs["labels"], text=inputs["text"], offset_mapping=inputs["offset_mapping"], ground=inputs["ground"])

2 frames
/usr/local/lib/python3.6/dist-packages/datasets/features.py in encode_batch(self, batch)
    802         encoded_batch = {}
    803         if set(batch) != set(self):
--> 804             print(batch)
    805             print(self)
    806             raise ValueError("Column mismatch between batch {} and features {}".format(set(batch), set(self)))

ValueError: Column mismatch between batch {'references', 'predictions'} and features {'ground', 'predictions', 'offset_mapping', 'text', 'references'}
```

On checking the features.py file, I see the call is made from add_batch() in metrics.py, which only takes in predictions and references. How do I make my custom metric work? Will it work with a trainer even if I am able to make this metric work?

Thanks,
Gunjan
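For anyone landing here later, below is a minimal sketch of the nesting workaround proposed in the comments above: fold the auxiliary fields into `references` so that `add_batch`/`compute` only ever see the two supported columns. The class name, feature layout, and score are illustrative assumptions, not the fix that was actually shipped:

```python
import datasets
import numpy as np
from sklearn.metrics import f1_score


class SpanF1(datasets.Metric):
    """Illustrative custom metric: everything besides `predictions` is
    packed inside `references`, since `Metric.add_batch` only accepts
    the `predictions` and `references` columns."""

    def _info(self):
        return datasets.MetricInfo(
            description="Sketch of a token-level F1 metric with extra fields",
            citation="",
            features=datasets.Features({
                "predictions": datasets.Sequence(datasets.Value("int32")),
                "references": {
                    "labels": datasets.Sequence(datasets.Value("int32")),
                    "offset_mapping": datasets.Sequence(
                        datasets.Sequence(datasets.Value("int32"))
                    ),
                    "text": datasets.Value("string"),
                },
            }),
        )

    def _compute(self, predictions, references):
        # Each reference is a dict carrying the auxiliary fields.
        labels = [ref["labels"] for ref in references]
        scores = [
            f1_score(gold, pred, labels=[0, 1], average="micro")
            for gold, pred in zip(labels, predictions)
        ]
        return {"token_wise_f1": float(np.mean(scores))}
```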
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1743/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1743/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1742
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1742/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1742/comments
https://api.github.com/repos/huggingface/datasets/issues/1742/events
https://github.com/huggingface/datasets/pull/1742
787,623,640
MDExOlB1bGxSZXF1ZXN0NTU2MjgyMDYw
1,742
Add GLUE Compat (compatible with transformers<3.5.0)
{ "avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4", "events_url": "https://api.github.com/users/JetRunner/events{/privacy}", "followers_url": "https://api.github.com/users/JetRunner/followers", "following_url": "https://api.github.com/users/JetRunner/following{/other_user}", "gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JetRunner", "id": 22514219, "login": "JetRunner", "node_id": "MDQ6VXNlcjIyNTE0MjE5", "organizations_url": "https://api.github.com/users/JetRunner/orgs", "received_events_url": "https://api.github.com/users/JetRunner/received_events", "repos_url": "https://api.github.com/users/JetRunner/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions", "type": "User", "url": "https://api.github.com/users/JetRunner" }
[]
closed
false
null
[]
null
[ "Maybe it would be simpler to just overwrite the order of the label classes of the `glue` dataset ?\r\n```python\r\nmnli = load_dataset(\"glue\", \"mnli\", label_classes=[\"contradiction\", \"entailment\", \"neutral\"])\r\n```", "Sounds good. Will close the issue if that works." ]
"2021-01-17T05:54:25Z"
"2021-03-29T12:43:30Z"
"2021-03-29T12:43:30Z"
CONTRIBUTOR
null
Link to our discussion on Slack (HF internal): https://huggingface.slack.com/archives/C014N4749J9/p1609668119337400

The next step is to add a compatible option in the new `run_glue.py`.

I duplicated `glue` and made the following changes:
1. Change the name to `glue_compat`.
2. Change the label assignments for MNLI and AX.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1742/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1742/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1742.diff", "html_url": "https://github.com/huggingface/datasets/pull/1742", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1742.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1742" }
true
https://api.github.com/repos/huggingface/datasets/issues/1741
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1741/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1741/comments
https://api.github.com/repos/huggingface/datasets/issues/1741/events
https://github.com/huggingface/datasets/issues/1741
787,327,060
MDU6SXNzdWU3ODczMjcwNjA=
1,741
error when running fine-tuning on text_classification
{ "avatar_url": "https://avatars.githubusercontent.com/u/43234824?v=4", "events_url": "https://api.github.com/users/XiaoYang66/events{/privacy}", "followers_url": "https://api.github.com/users/XiaoYang66/followers", "following_url": "https://api.github.com/users/XiaoYang66/following{/other_user}", "gists_url": "https://api.github.com/users/XiaoYang66/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/XiaoYang66", "id": 43234824, "login": "XiaoYang66", "node_id": "MDQ6VXNlcjQzMjM0ODI0", "organizations_url": "https://api.github.com/users/XiaoYang66/orgs", "received_events_url": "https://api.github.com/users/XiaoYang66/received_events", "repos_url": "https://api.github.com/users/XiaoYang66/repos", "site_admin": false, "starred_url": "https://api.github.com/users/XiaoYang66/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XiaoYang66/subscriptions", "type": "User", "url": "https://api.github.com/users/XiaoYang66" }
[]
closed
false
null
[]
null
[ "none" ]
"2021-01-16T02:23:19Z"
"2021-01-16T02:39:28Z"
"2021-01-16T02:39:18Z"
NONE
null
dataset: sem_eval_2014_task_1
pretrained model: bert-base-uncased

Error description: when I use these resources to fine-tune a text classification model on sem_eval_2014_task_1, there is always a problem (the error also occurs when I use other datasets). I followed the Colab code (url: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/text_classification.ipynb#scrollTo=TlqNaB8jIrJW).

The error is like this:

```
File "train.py", line 69, in <module>
    trainer.train()
File "/home/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/transformers/trainer.py", line 784, in train
    for step, inputs in enumerate(epoch_iterator):
File "/home/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
    data = self._next_data()
File "/home/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
File "/home/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
KeyError: 2
```

This is my code:

```python
from datasets import load_dataset
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import (
    BertForSequenceClassification,
    BertTokenizerFast,
    Trainer,
    TrainingArguments,
)

dataset_name = 'sem_eval_2014_task_1'
num_labels_size = 3
batch_size = 4
model_checkpoint = 'bert-base-uncased'
number_train_epoch = 5


def tokenize(batch):
    return tokenizer(batch['premise'], batch['hypothesis'], truncation=True)


def compute_metrics(pred):
    labels = pred.label_ids
    preds = pred.predictions.argmax(-1)
    precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='micro')
    acc = accuracy_score(labels, preds)
    return {
        'accuracy': acc,
        'f1': f1,
        'precision': precision,
        'recall': recall
    }


model = BertForSequenceClassification.from_pretrained(model_checkpoint, num_labels=num_labels_size)
tokenizer = BertTokenizerFast.from_pretrained(model_checkpoint, use_fast=True)

train_dataset = load_dataset(dataset_name, split='train')
test_dataset = load_dataset(dataset_name, split='test')

train_encoded_dataset = train_dataset.map(tokenize, batched=True)
test_encoded_dataset = test_dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir='./results',
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=number_train_epoch,
    weight_decay=0.01,
    do_predict=True,
)

trainer = Trainer(
    model=model,
    args=args,
    compute_metrics=compute_metrics,
    train_dataset=train_encoded_dataset,
    eval_dataset=test_encoded_dataset,
    tokenizer=tokenizer
)

trainer.train()
trainer.evaluate()
```
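As an aside for readers debugging the same script: one plausible cause (unconfirmed for this particular report) is that `sem_eval_2014_task_1` stores its label under `entailment_judgment`, so the `Trainer` never sees a `labels` column, and the encoded dataset is never restricted to tensor-friendly columns. A minimal sketch of that fix under those assumptions — column names are taken from the dataset card, not verified against this user's setup:

```python
from datasets import load_dataset
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")


def tokenize(batch):
    return tokenizer(batch["premise"], batch["hypothesis"], truncation=True)


train_dataset = load_dataset("sem_eval_2014_task_1", split="train")
train_dataset = train_dataset.map(tokenize, batched=True)

# Expose the label column under the name the Trainer expects
# ("entailment_judgment" is the label column per the dataset card).
train_dataset = train_dataset.map(lambda x: {"labels": x["entailment_judgment"]})

# Keep only tensor-friendly columns when the DataLoader indexes the dataset.
train_dataset.set_format(
    type="torch",
    columns=["input_ids", "token_type_ids", "attention_mask", "labels"],
)
```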
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1741/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1741/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1740
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1740/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1740/comments
https://api.github.com/repos/huggingface/datasets/issues/1740/events
https://github.com/huggingface/datasets/pull/1740
787,264,605
MDExOlB1bGxSZXF1ZXN0NTU2MDA5NjM1
1,740
add id_liputan6 dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4", "events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}", "followers_url": "https://api.github.com/users/cahya-wirawan/followers", "following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}", "gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cahya-wirawan", "id": 7669893, "login": "cahya-wirawan", "node_id": "MDQ6VXNlcjc2Njk4OTM=", "organizations_url": "https://api.github.com/users/cahya-wirawan/orgs", "received_events_url": "https://api.github.com/users/cahya-wirawan/received_events", "repos_url": "https://api.github.com/users/cahya-wirawan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions", "type": "User", "url": "https://api.github.com/users/cahya-wirawan" }
[]
closed
false
null
[]
null
[]
"2021-01-15T22:58:34Z"
"2021-01-20T13:41:26Z"
"2021-01-20T13:41:26Z"
CONTRIBUTOR
null
id_liputan6 is a large-scale Indonesian summarization dataset. The articles were harvested from an online news portal, yielding 215,827 document-summary pairs: https://arxiv.org/abs/2011.00679
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1740/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1740/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1740.diff", "html_url": "https://github.com/huggingface/datasets/pull/1740", "merged_at": "2021-01-20T13:41:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/1740.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1740" }
true
https://api.github.com/repos/huggingface/datasets/issues/1739
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1739/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1739/comments
https://api.github.com/repos/huggingface/datasets/issues/1739/events
https://github.com/huggingface/datasets/pull/1739
787,219,138
MDExOlB1bGxSZXF1ZXN0NTU1OTY5Njgx
1,739
fixes and improvements for the WebNLG loader
{ "avatar_url": "https://avatars.githubusercontent.com/u/9607332?v=4", "events_url": "https://api.github.com/users/Shimorina/events{/privacy}", "followers_url": "https://api.github.com/users/Shimorina/followers", "following_url": "https://api.github.com/users/Shimorina/following{/other_user}", "gists_url": "https://api.github.com/users/Shimorina/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Shimorina", "id": 9607332, "login": "Shimorina", "node_id": "MDQ6VXNlcjk2MDczMzI=", "organizations_url": "https://api.github.com/users/Shimorina/orgs", "received_events_url": "https://api.github.com/users/Shimorina/received_events", "repos_url": "https://api.github.com/users/Shimorina/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Shimorina/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Shimorina/subscriptions", "type": "User", "url": "https://api.github.com/users/Shimorina" }
[]
closed
false
null
[]
null
[ "The dataset card is fantastic!\r\n\r\nLooks good to me! Did you check that this still passes the slow tests with the existing dummy data?", "Yes, I ran and passed all the tests specified in [this guide](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#automatically-add-code-metadata), including the slow ones.", "I just added the `from pathlib import Path` at the top to fix the script", "I ran the tests locally and they all pass, merging", "Thank you for the review!" ]
"2021-01-15T21:45:23Z"
"2021-01-29T14:34:06Z"
"2021-01-29T10:53:03Z"
CONTRIBUTOR
null
- fixes test set loading in v3.0
- adds additional fields for v3.0_ru
- adds info to the WebNLG data card
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1739/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1739/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1739.diff", "html_url": "https://github.com/huggingface/datasets/pull/1739", "merged_at": "2021-01-29T10:53:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/1739.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1739" }
true
https://api.github.com/repos/huggingface/datasets/issues/1738
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1738/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1738/comments
https://api.github.com/repos/huggingface/datasets/issues/1738/events
https://github.com/huggingface/datasets/pull/1738
786,068,440
MDExOlB1bGxSZXF1ZXN0NTU0OTk2NDU4
1,738
Conda support
{ "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LysandreJik", "id": 30755778, "login": "LysandreJik", "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "repos_url": "https://api.github.com/users/LysandreJik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "type": "User", "url": "https://api.github.com/users/LysandreJik" }
[]
closed
false
null
[]
null
[ "Nice thanks :) \r\nNote that in `datasets` the tags are simply the version without the `v`. For example `1.2.1`.", "Do you push tags only for versions?", "Yes I've always used tags only for versions" ]
"2021-01-14T15:11:25Z"
"2021-01-15T10:08:20Z"
"2021-01-15T10:08:19Z"
MEMBER
null
This will push a new version to Anaconda Cloud every time a tag starting with `v` (like `v1.2.2`) is pushed. It will appear here: https://anaconda.org/huggingface/datasets

It depends on `conda-forge` for now, so the following is required for installation:

```
conda install -c huggingface -c conda-forge datasets
```
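A quick sanity check after installing from the new channel (nothing project-specific, just the usual import check):

```python
# After `conda install -c huggingface -c conda-forge datasets`,
# the package should import and report the tagged version.
import datasets

print(datasets.__version__)  # e.g. "1.2.2" for the v1.2.2 tag
```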
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 4, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/1738/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1738/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1738.diff", "html_url": "https://github.com/huggingface/datasets/pull/1738", "merged_at": "2021-01-15T10:08:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/1738.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1738" }
true
https://api.github.com/repos/huggingface/datasets/issues/1737
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1737/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1737/comments
https://api.github.com/repos/huggingface/datasets/issues/1737/events
https://github.com/huggingface/datasets/pull/1737
785,606,286
MDExOlB1bGxSZXF1ZXN0NTU0NjA2ODg5
1,737
update link in TLC to be github links
{ "avatar_url": "https://avatars.githubusercontent.com/u/6429850?v=4", "events_url": "https://api.github.com/users/chameleonTK/events{/privacy}", "followers_url": "https://api.github.com/users/chameleonTK/followers", "following_url": "https://api.github.com/users/chameleonTK/following{/other_user}", "gists_url": "https://api.github.com/users/chameleonTK/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/chameleonTK", "id": 6429850, "login": "chameleonTK", "node_id": "MDQ6VXNlcjY0Mjk4NTA=", "organizations_url": "https://api.github.com/users/chameleonTK/orgs", "received_events_url": "https://api.github.com/users/chameleonTK/received_events", "repos_url": "https://api.github.com/users/chameleonTK/repos", "site_admin": false, "starred_url": "https://api.github.com/users/chameleonTK/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chameleonTK/subscriptions", "type": "User", "url": "https://api.github.com/users/chameleonTK" }
[]
closed
false
null
[]
null
[ "Thanks for updating this!" ]
"2021-01-14T02:49:21Z"
"2021-01-14T10:25:24Z"
"2021-01-14T10:25:24Z"
CONTRIBUTOR
null
Based on this issue https://github.com/huggingface/datasets/issues/1064, I can now use the official links.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1737/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1737/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1737.diff", "html_url": "https://github.com/huggingface/datasets/pull/1737", "merged_at": "2021-01-14T10:25:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/1737.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1737" }
true
https://api.github.com/repos/huggingface/datasets/issues/1736
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1736/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1736/comments
https://api.github.com/repos/huggingface/datasets/issues/1736/events
https://github.com/huggingface/datasets/pull/1736
785,433,854
MDExOlB1bGxSZXF1ZXN0NTU0NDYyNjYw
1,736
Adjust BrWaC dataset features name
{ "avatar_url": "https://avatars.githubusercontent.com/u/5097052?v=4", "events_url": "https://api.github.com/users/jonatasgrosman/events{/privacy}", "followers_url": "https://api.github.com/users/jonatasgrosman/followers", "following_url": "https://api.github.com/users/jonatasgrosman/following{/other_user}", "gists_url": "https://api.github.com/users/jonatasgrosman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jonatasgrosman", "id": 5097052, "login": "jonatasgrosman", "node_id": "MDQ6VXNlcjUwOTcwNTI=", "organizations_url": "https://api.github.com/users/jonatasgrosman/orgs", "received_events_url": "https://api.github.com/users/jonatasgrosman/received_events", "repos_url": "https://api.github.com/users/jonatasgrosman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jonatasgrosman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonatasgrosman/subscriptions", "type": "User", "url": "https://api.github.com/users/jonatasgrosman" }
[]
closed
false
null
[]
null
[]
"2021-01-13T20:39:04Z"
"2021-01-14T10:29:38Z"
"2021-01-14T10:29:38Z"
CONTRIBUTOR
null
I added this dataset a few days ago, and today I used it to train some models and realized that the feature names aren't great. Looking at the current feature hierarchy, we have "paragraphs" containing a list of "sentences", each containing a list of "sentences?!". But the actual hierarchy is a "text" containing a list of "paragraphs", each containing a list of "sentences". I confused myself trying to use the dataset with these names, so I think it's better to change them.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1736/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1736/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1736.diff", "html_url": "https://github.com/huggingface/datasets/pull/1736", "merged_at": "2021-01-14T10:29:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/1736.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1736" }
true
https://api.github.com/repos/huggingface/datasets/issues/1735
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1735/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1735/comments
https://api.github.com/repos/huggingface/datasets/issues/1735/events
https://github.com/huggingface/datasets/pull/1735
785,184,740
MDExOlB1bGxSZXF1ZXN0NTU0MjUzMDcw
1,735
Update add new dataset template
{ "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sgugger", "id": 35901082, "login": "sgugger", "node_id": "MDQ6VXNlcjM1OTAxMDgy", "organizations_url": "https://api.github.com/users/sgugger/orgs", "received_events_url": "https://api.github.com/users/sgugger/received_events", "repos_url": "https://api.github.com/users/sgugger/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "type": "User", "url": "https://api.github.com/users/sgugger" }
[]
closed
false
null
[]
null
[ "Add new \"dataset\"? ;)", "Lol, too used to Transformers ;-)" ]
"2021-01-13T15:08:09Z"
"2021-01-14T15:16:01Z"
"2021-01-14T15:16:00Z"
MEMBER
null
This PR fixes a few typos in the "Add new dataset template" and clarifies a bit what to do for the dummy data creation when the `auto_generate` flag can't work.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1735/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1735/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1735.diff", "html_url": "https://github.com/huggingface/datasets/pull/1735", "merged_at": "2021-01-14T15:16:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/1735.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1735" }
true
https://api.github.com/repos/huggingface/datasets/issues/1734
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1734/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1734/comments
https://api.github.com/repos/huggingface/datasets/issues/1734/events
https://github.com/huggingface/datasets/pull/1734
784,956,707
MDExOlB1bGxSZXF1ZXN0NTU0MDYxMzMz
1,734
Fix empty token bug for `thainer` and `lst20`
{ "avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4", "events_url": "https://api.github.com/users/cstorm125/events{/privacy}", "followers_url": "https://api.github.com/users/cstorm125/followers", "following_url": "https://api.github.com/users/cstorm125/following{/other_user}", "gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cstorm125", "id": 15519308, "login": "cstorm125", "node_id": "MDQ6VXNlcjE1NTE5MzA4", "organizations_url": "https://api.github.com/users/cstorm125/orgs", "received_events_url": "https://api.github.com/users/cstorm125/received_events", "repos_url": "https://api.github.com/users/cstorm125/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions", "type": "User", "url": "https://api.github.com/users/cstorm125" }
[]
closed
false
null
[]
null
[]
"2021-01-13T09:55:09Z"
"2021-01-14T10:42:18Z"
"2021-01-14T10:42:18Z"
CONTRIBUTOR
null
Add a condition to check that tokens exist before yielding in `thainer` and `lst20`; a sketch of the guard follows below.
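A sketch of what that guard presumably looks like inside the loaders' `_generate_examples` loop — field names here are assumptions, not the exact `thainer`/`lst20` code:

```python
def generate_examples(sentences):
    """Yield only non-empty sentences, mirroring the described fix.

    `sentences` is assumed to be an iterable of (tokens, tags) pairs
    accumulated while parsing the raw files.
    """
    guid = 0
    for tokens, tags in sentences:
        if not tokens:  # the added condition: skip empty token lists
            continue
        yield guid, {"tokens": tokens, "ner_tags": tags}
        guid += 1


# Example: the empty sentence is silently dropped.
examples = list(generate_examples([(["a", "b"], ["O", "O"]), ([], [])]))
assert len(examples) == 1
```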
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1734/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1734/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1734.diff", "html_url": "https://github.com/huggingface/datasets/pull/1734", "merged_at": "2021-01-14T10:42:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/1734.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1734" }
true
https://api.github.com/repos/huggingface/datasets/issues/1733
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1733/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1733/comments
https://api.github.com/repos/huggingface/datasets/issues/1733/events
https://github.com/huggingface/datasets/issues/1733
784,903,002
MDU6SXNzdWU3ODQ5MDMwMDI=
1,733
connection issue with glue, what is the data url for glue?
{ "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghost", "id": 10137, "login": "ghost", "node_id": "MDQ6VXNlcjEwMTM3", "organizations_url": "https://api.github.com/users/ghost/orgs", "received_events_url": "https://api.github.com/users/ghost/received_events", "repos_url": "https://api.github.com/users/ghost/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "type": "User", "url": "https://api.github.com/users/ghost" }
[]
closed
false
null
[]
null
[ "Hello @juliahane, which config of GLUE causes you trouble?\r\nThe URLs are defined in the dataset script source code: https://github.com/huggingface/datasets/blob/master/datasets/glue/glue.py" ]
"2021-01-13T08:37:40Z"
"2021-08-04T18:13:55Z"
"2021-08-04T18:13:55Z"
NONE
null
Hi, my code sometimes fails due to a connection issue with GLUE. Could you tell me the URL the datasets library is trying to read GLUE from, so I can test whether the issue is on the side of the machines I am working on or not? Thanks.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1733/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1733/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1732
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1732/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1732/comments
https://api.github.com/repos/huggingface/datasets/issues/1732/events
https://github.com/huggingface/datasets/pull/1732
784,874,490
MDExOlB1bGxSZXF1ZXN0NTUzOTkzNTAx
1,732
[GEM Dataset] Added TurkCorpus, an evaluation dataset for sentence simplification.
{ "avatar_url": "https://avatars.githubusercontent.com/u/11708999?v=4", "events_url": "https://api.github.com/users/mounicam/events{/privacy}", "followers_url": "https://api.github.com/users/mounicam/followers", "following_url": "https://api.github.com/users/mounicam/following{/other_user}", "gists_url": "https://api.github.com/users/mounicam/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mounicam", "id": 11708999, "login": "mounicam", "node_id": "MDQ6VXNlcjExNzA4OTk5", "organizations_url": "https://api.github.com/users/mounicam/orgs", "received_events_url": "https://api.github.com/users/mounicam/received_events", "repos_url": "https://api.github.com/users/mounicam/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mounicam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mounicam/subscriptions", "type": "User", "url": "https://api.github.com/users/mounicam" }
[]
closed
false
null
[]
null
[ "Thank you for the feedback! I updated the code. " ]
"2021-01-13T07:50:19Z"
"2021-01-14T10:19:41Z"
"2021-01-14T10:19:41Z"
CONTRIBUTOR
null
We want to use TurkCorpus for validation and testing of the sentence simplification task.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1732/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1732/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1732.diff", "html_url": "https://github.com/huggingface/datasets/pull/1732", "merged_at": "2021-01-14T10:19:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/1732.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1732" }
true
https://api.github.com/repos/huggingface/datasets/issues/1731
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1731/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1731/comments
https://api.github.com/repos/huggingface/datasets/issues/1731/events
https://github.com/huggingface/datasets/issues/1731
784,744,674
MDU6SXNzdWU3ODQ3NDQ2NzQ=
1,731
Couldn't reach swda.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/13365326?v=4", "events_url": "https://api.github.com/users/yangp725/events{/privacy}", "followers_url": "https://api.github.com/users/yangp725/followers", "following_url": "https://api.github.com/users/yangp725/following{/other_user}", "gists_url": "https://api.github.com/users/yangp725/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yangp725", "id": 13365326, "login": "yangp725", "node_id": "MDQ6VXNlcjEzMzY1MzI2", "organizations_url": "https://api.github.com/users/yangp725/orgs", "received_events_url": "https://api.github.com/users/yangp725/received_events", "repos_url": "https://api.github.com/users/yangp725/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yangp725/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yangp725/subscriptions", "type": "User", "url": "https://api.github.com/users/yangp725" }
[]
closed
false
null
[]
null
[ "Hi @yangp725,\r\nThe SWDA has been added very recently and has not been released yet, thus it is not available in the `1.2.0` version of 🤗`datasets`.\r\nYou can still access it by installing the latest version of the library (master branch), by following instructions in [this issue](https://github.com/huggingface/datasets/issues/1641#issuecomment-751571471).\r\nLet me know if this helps !", "Thanks @SBrandeis ,\r\nProblem solved by downloading and installing the latest version datasets." ]
"2021-01-13T02:57:40Z"
"2021-01-13T11:17:40Z"
"2021-01-13T11:17:40Z"
NONE
null
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.0/datasets/swda/swda.py
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1731/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1731/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1730
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1730/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1730/comments
https://api.github.com/repos/huggingface/datasets/issues/1730/events
https://github.com/huggingface/datasets/pull/1730
784,617,525
MDExOlB1bGxSZXF1ZXN0NTUzNzgxMDY0
1,730
Add MNIST dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sgugger", "id": 35901082, "login": "sgugger", "node_id": "MDQ6VXNlcjM1OTAxMDgy", "organizations_url": "https://api.github.com/users/sgugger/orgs", "received_events_url": "https://api.github.com/users/sgugger/received_events", "repos_url": "https://api.github.com/users/sgugger/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "type": "User", "url": "https://api.github.com/users/sgugger" }
[]
closed
false
null
[]
null
[]
"2021-01-12T21:48:02Z"
"2021-01-13T10:19:47Z"
"2021-01-13T10:19:46Z"
MEMBER
null
This PR adds the MNIST dataset to the library.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1730/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1730/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1730.diff", "html_url": "https://github.com/huggingface/datasets/pull/1730", "merged_at": "2021-01-13T10:19:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/1730.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1730" }
true
https://api.github.com/repos/huggingface/datasets/issues/1729
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1729/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1729/comments
https://api.github.com/repos/huggingface/datasets/issues/1729/events
https://github.com/huggingface/datasets/issues/1729
784,565,898
MDU6SXNzdWU3ODQ1NjU4OTg=
1,729
Is there support for Deep learning datasets?
{ "avatar_url": "https://avatars.githubusercontent.com/u/28235457?v=4", "events_url": "https://api.github.com/users/pablodz/events{/privacy}", "followers_url": "https://api.github.com/users/pablodz/followers", "following_url": "https://api.github.com/users/pablodz/following{/other_user}", "gists_url": "https://api.github.com/users/pablodz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pablodz", "id": 28235457, "login": "pablodz", "node_id": "MDQ6VXNlcjI4MjM1NDU3", "organizations_url": "https://api.github.com/users/pablodz/orgs", "received_events_url": "https://api.github.com/users/pablodz/received_events", "repos_url": "https://api.github.com/users/pablodz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pablodz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pablodz/subscriptions", "type": "User", "url": "https://api.github.com/users/pablodz" }
[]
closed
false
null
[]
null
[ "Hi @ZurMaD!\r\nThanks for your interest in 🤗 `datasets`. Support for image datasets is at an early stage, with CIFAR-10 added in #1617 \r\nMNIST is also on the way: #1730 \r\n\r\nIf you feel like adding another image dataset, I would advise starting by reading the [ADD_NEW_DATASET.md](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md) guide. New datasets are always very much appreciated 🚀\r\n" ]
"2021-01-12T20:22:41Z"
"2021-03-31T04:24:07Z"
"2021-03-31T04:24:07Z"
NONE
null
I looked around this repository and, looking at the datasets, I think there's no support for image datasets. Or am I missing something? For example, would it be possible to add a repo like this: https://github.com/DZPeru/fish-datasets
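As a hedged illustration of the early image-dataset support mentioned in the comments above (CIFAR-10 via #1617, MNIST via #1730), loading one of them might look like this, assuming an installed version of `datasets` that includes those PRs:

```python
from datasets import load_dataset

# CIFAR-10 was added in #1617; this assumes a release containing it
cifar10 = load_dataset("cifar10")

# Inspect the first training example (field names depend on the script)
print(cifar10["train"][0].keys())
```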
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1729/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1729/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1728
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1728/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1728/comments
https://api.github.com/repos/huggingface/datasets/issues/1728/events
https://github.com/huggingface/datasets/issues/1728
784,458,342
MDU6SXNzdWU3ODQ0NTgzNDI=
1,728
Add an entry to an arrow dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/18645407?v=4", "events_url": "https://api.github.com/users/ameet-1997/events{/privacy}", "followers_url": "https://api.github.com/users/ameet-1997/followers", "following_url": "https://api.github.com/users/ameet-1997/following{/other_user}", "gists_url": "https://api.github.com/users/ameet-1997/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ameet-1997", "id": 18645407, "login": "ameet-1997", "node_id": "MDQ6VXNlcjE4NjQ1NDA3", "organizations_url": "https://api.github.com/users/ameet-1997/orgs", "received_events_url": "https://api.github.com/users/ameet-1997/received_events", "repos_url": "https://api.github.com/users/ameet-1997/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ameet-1997/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ameet-1997/subscriptions", "type": "User", "url": "https://api.github.com/users/ameet-1997" }
[]
closed
false
null
[]
null
[ "Hi @ameet-1997,\r\nI think what you are looking for is the `concatenate_datasets` function: https://huggingface.co/docs/datasets/processing.html?highlight=concatenate#concatenate-several-datasets\r\n\r\nFor your use case, I would use the [`map` method](https://huggingface.co/docs/datasets/processing.html?highlight=concatenate#processing-data-with-map) to transform the SQuAD sentences and the `concatenate` the original and mapped dataset.\r\n\r\nLet me know If this helps!", "That's a great idea! Thank you so much!\r\n\r\nWhen I try that solution, I get the following error when I try to concatenate `datasets` and `modified_dataset`. I have also attached the output I get when I print out those two variables. Am I missing something?\r\n\r\nCode:\r\n``` python\r\ncombined_dataset = concatenate_datasets([datasets, modified_dataset])\r\n```\r\n\r\nError:\r\n```\r\nAttributeError: 'DatasetDict' object has no attribute 'features'\r\n```\r\n\r\nOutput:\r\n```\r\n(Pdb) datasets\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['attention_mask', 'input_ids', 'special_tokens_mask'],\r\n num_rows: 493\r\n })\r\n})\r\n(Pdb) modified_dataset\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['attention_mask', 'input_ids', 'special_tokens_mask'],\r\n num_rows: 493\r\n })\r\n})\r\n```\r\n\r\nThe error is stemming from the fact that the attribute `datasets.features` does not exist. Would it not be possible to use `concatenate_datasets` in such a case? Is there an alternate solution?", "You should do `combined_dataset = concatenate_datasets([datasets['train'], modified_dataset['train']])`\r\n\r\nDidn't we talk about returning a Dataset instead of a DatasetDict with load_dataset and no split provided @lhoestq? Not sure it's the way to go but I'm wondering if it's not simpler for some use-cases.", "> Didn't we talk about returning a Dataset instead of a DatasetDict with load_dataset and no split provided @lhoestq? Not sure it's the way to go but I'm wondering if it's not simpler for some use-cases.\r\n\r\nMy opinion is that users should always know in advance what type of objects they're going to get. Otherwise the development workflow on their side is going to be pretty chaotic with sometimes unexpected behaviors.\r\nFor instance is `split=` is not specified it's currently always returning a DatasetDict. And if `split=\"train\"` is given for example it's always returning a Dataset.", "Thanks @thomwolf. Your solution worked!" ]
"2021-01-12T18:01:47Z"
"2021-01-18T19:15:32Z"
"2021-01-18T19:15:32Z"
NONE
null
Is it possible to add an entry to a dataset object? **Motivation: I want to transform the sentences in the dataset and add them to the original dataset** For example, say we have the following code: ``` python from datasets import load_dataset # Load a dataset and print the first examples in the training set squad_dataset = load_dataset('squad') print(squad_dataset['train'][0]) ``` Is it possible to add an entry to `squad_dataset`? Something like the following? ``` python squad_dataset.append({'text': "This is a new sentence"}) ``` The motivation for doing this is that I want to transform the sentences in the squad dataset and add them to the original dataset. If the above doesn't work, is there any other way of achieving the goal mentioned above? Perhaps by creating a new arrow dataset by using the older one and the transformed sentences?
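A minimal sketch of the solution the thread converges on: `map` to transform the sentences, then `concatenate_datasets` on the matching splits. The `lower()` transform here is a hypothetical stand-in for whatever transformation is actually intended.

```python
from datasets import load_dataset, concatenate_datasets

squad = load_dataset("squad")

def transform(example):
    # hypothetical transformation; replace with the real one
    example["question"] = example["question"].lower()
    return example

modified = squad["train"].map(transform)

# concatenate Dataset objects (not the DatasetDict), as pointed out in the comments
combined = concatenate_datasets([squad["train"], modified])
print(len(combined))  # twice the original training set size
```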
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1728/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1728/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1727
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1727/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1727/comments
https://api.github.com/repos/huggingface/datasets/issues/1727/events
https://github.com/huggingface/datasets/issues/1727
784,435,131
MDU6SXNzdWU3ODQ0MzUxMzE=
1,727
BLEURT score calculation raises UnrecognizedFlagError
{ "avatar_url": "https://avatars.githubusercontent.com/u/6603920?v=4", "events_url": "https://api.github.com/users/nadavo/events{/privacy}", "followers_url": "https://api.github.com/users/nadavo/followers", "following_url": "https://api.github.com/users/nadavo/following{/other_user}", "gists_url": "https://api.github.com/users/nadavo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nadavo", "id": 6603920, "login": "nadavo", "node_id": "MDQ6VXNlcjY2MDM5MjA=", "organizations_url": "https://api.github.com/users/nadavo/orgs", "received_events_url": "https://api.github.com/users/nadavo/received_events", "repos_url": "https://api.github.com/users/nadavo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nadavo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nadavo/subscriptions", "type": "User", "url": "https://api.github.com/users/nadavo" }
[]
closed
false
null
[]
null
[ "Upgrading tensorflow to version 2.4.0 solved the issue.", "I still have the same error even with TF 2.4.0.", "And I have the same error with TF 2.4.1. I believe this issue should be reopened. Any ideas?!", "I'm seeing the same issue with TF 2.4.1 when running the following in https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb:\r\n```\r\n!pip install git+https://github.com/google-research/bleurt.git\r\nreferences = [\"foo bar baz\", \"one two three\"]\r\nbleurt_metric = load_metric('bleurt')\r\npredictions = [\"foo bar\", \"four five six\"]\r\nbleurt_metric.compute(predictions=predictions, references=references)\r\n```", "@aleSuglia @oscartackstrom - Are you getting the error when running your code in a Jupyter notebook ?\r\n\r\nI tried reproducing this error again, and was unable to do so from the python command line console in a virtual environment similar to the one I originally used (and unfortunately no longer have access to) when I first got the error. \r\nHowever, I've managed to reproduce the error by running the same code in a Jupyter notebook running a kernel from the same virtual environment.\r\nThis made me suspect that the problem is somehow related to the Jupyter notebook.\r\n\r\nMore environment details:\r\n```\r\nOS: Ubuntu Linux 18.04\r\nconda==4.8.3\r\npython==3.8.5\r\ndatasets==1.3.0\r\ntensorflow==2.4.0\r\nBLEURT==0.0.1\r\nnotebook==6.2.0\r\n```", "This happens when running the notebook on colab. The issue seems to be that colab populates sys.argv with arguments not handled by bleurt.\r\n\r\nRunning this before calling bleurt fixes it:\r\n```\r\nimport sys\r\nsys.argv = sys.argv[:1]\r\n```\r\n\r\nNot the most elegant solution. Perhaps it needs to be fixed in the bleurt code itself rather than huggingface?\r\n\r\nThis is the output of `print(sys.argv)` when running on colab:\r\n```\r\n['/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py', '-f', '/root/.local/share/jupyter/runtime/kernel-a857a78c-44d6-4b9d-b18a-030b858ee327.json']\r\n```", "I got the error when running it from the command line. It looks more like an error that should be fixed in the BLEURT codebase.", "Seems to be a known issue in the bleurt codebase: https://github.com/google-research/bleurt/issues/24.", "Hi, the problem should be solved now.", "Hi @tsellam! I can verify that the issue is indeed fixed now. Thanks!" ]
"2021-01-12T17:27:02Z"
"2022-06-01T16:06:02Z"
"2022-06-01T16:06:02Z"
NONE
null
Calling the `compute` method for **bleurt** metric fails with an `UnrecognizedFlagError` for `FLAGS.bleurt_batch_size`. My environment: ``` python==3.8.5 datasets==1.2.0 tensorflow==2.3.1 cudatoolkit==11.0.221 ``` Test code for reproducing the error: ``` from datasets import load_metric bleurt = load_metric('bleurt') gen_text = "I am walking on the promenade today" ref_text = "I am walking along the promenade on this sunny day" bleurt.compute(predictions=[gen_text], references=[ref_text]) ``` Error Output: ``` Using default BLEURT-Base checkpoint for sequence maximum length 128. You can use a bigger model for better results with e.g.: datasets.load_metric('bleurt', 'bleurt-large-512'). INFO:tensorflow:Reading checkpoint /home/ubuntu/.cache/huggingface/metrics/bleurt/default/downloads/extracted/9aee35580225730ac5422599f35c4986e4c49cafd08082123342b1019720dac4/bleurt-base-128. INFO:tensorflow:Config file found, reading. INFO:tensorflow:Will load checkpoint bert_custom INFO:tensorflow:Performs basic checks... INFO:tensorflow:... name:bert_custom INFO:tensorflow:... vocab_file:vocab.txt INFO:tensorflow:... bert_config_file:bert_config.json INFO:tensorflow:... do_lower_case:True INFO:tensorflow:... max_seq_length:128 INFO:tensorflow:Creating BLEURT scorer. INFO:tensorflow:Loading model... INFO:tensorflow:BLEURT initialized. --------------------------------------------------------------------------- UnrecognizedFlagError Traceback (most recent call last) <ipython-input-12-8b3f4322318a> in <module> 2 gen_text = "I am walking on the promenade today" 3 ref_text = "I am walking along the promenade on this sunny day" ----> 4 bleurt.compute(predictions=[gen_text], references=[ref_text]) ~/anaconda3/envs/noved/lib/python3.8/site-packages/datasets/metric.py in compute(self, *args, **kwargs) 396 references = self.data["references"] 397 with temp_seed(self.seed): --> 398 output = self._compute(predictions=predictions, references=references, **kwargs) 399 400 if self.buf_writer is not None: ~/.cache/huggingface/modules/datasets_modules/metrics/bleurt/b1de33e1cbbcb1dbe276c887efa1fad68c6aff913885108078fa1ad408908778/bleurt.py in _compute(self, predictions, references) 103 104 def _compute(self, predictions, references): --> 105 scores = self.scorer.score(references=references, candidates=predictions) 106 return {"scores": scores} ~/anaconda3/envs/noved/lib/python3.8/site-packages/bleurt/score.py in score(self, references, candidates, batch_size) 164 """ 165 if not batch_size: --> 166 batch_size = FLAGS.bleurt_batch_size 167 168 candidates, references = list(candidates), list(references) ~/anaconda3/envs/noved/lib/python3.8/site-packages/tensorflow/python/platform/flags.py in __getattr__(self, name) 83 # a flag. 84 if not wrapped.is_parsed(): ---> 85 wrapped(_sys.argv) 86 return wrapped.__getattr__(name) 87 ~/anaconda3/envs/noved/lib/python3.8/site-packages/absl/flags/_flagvalues.py in __call__(self, argv, known_only) 643 for name, value in unknown_flags: 644 suggestions = _helpers.get_flag_suggestions(name, list(self)) --> 645 raise _exceptions.UnrecognizedFlagError( 646 name, value, suggestions=suggestions) 647 UnrecognizedFlagError: Unknown command line flag 'f' ``` Possible Fix: Modify `_compute` method https://github.com/huggingface/datasets/blob/7e64851a12263dc74d41c668167918484c8000ab/metrics/bleurt/bleurt.py#L104 to receive a `batch_size` argument, for example: ``` def _compute(self, predictions, references, batch_size=1): scores = self.scorer.score(references=references, candidates=predictions, batch_size=batch_size) return {"scores": scores} ```
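A hedged sketch of the notebook workaround mentioned in the comments above: strip the kernel-injected arguments from `sys.argv` before the metric first touches absl's flags, so the flag parser never sees the unknown `-f` argument.

```python
import sys

from datasets import load_metric

# Jupyter/Colab kernels inject extra argv entries (e.g. '-f', kernel.json)
# that absl's flag parser cannot handle; keep only the program name.
sys.argv = sys.argv[:1]

bleurt = load_metric("bleurt")
scores = bleurt.compute(
    predictions=["I am walking on the promenade today"],
    references=["I am walking along the promenade on this sunny day"],
)
print(scores)
```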
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1727/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1727/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1726
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1726/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1726/comments
https://api.github.com/repos/huggingface/datasets/issues/1726/events
https://github.com/huggingface/datasets/pull/1726
784,336,370
MDExOlB1bGxSZXF1ZXN0NTUzNTQ0ODg4
1,726
Offline loading
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "It's maybe a bit annoying to add but could we maybe have as well a version of the local data loading scripts in the package?\r\nThe `text`, `json`, `csv`. Thinking about people like in #1725 who are expecting to be able to work with local data without downloading anything.\r\n\r\nMaybe we can add them to package_data or something?", "Yes I mentioned this in #824 as well. I'm looking into it", "Alright now `csv`, `json`, `text` and `pandas` are \"packaged datasets\", i.e. they're part of the `datasets` package, which makes them available in offline mode without any change in terms of API:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nd = load_dataset(\"csv\", data_files=[\"path/to/data.csv\"])\r\n```\r\n\r\nInstead of loading the dataset script from the module cache, it's loaded from inside the `datasets` package.\r\n\r\nI updated the test to still be able to fetch the dummy data files for those datasets from `datasets/{text|csv|pandas|json}/dummy` in the repo.", "Alright now all test pass :)\r\n(I don't thank you windows)", "LGTM! Since you're getting the local script's last modification date anyways do you think it might be a good idea to show it in the warning?", "> LGTM! Since you're getting the local script's last modification date anyways do you think it might be a good idea to show it in the warning?\r\n\r\nYep good idea. I added the date in the warning. For example `(last modified on Mon Nov 30 11:01:56 2020)`" ]
"2021-01-12T15:21:57Z"
"2022-02-15T10:32:10Z"
"2021-01-19T16:42:32Z"
MEMBER
null
As discussed in #824 it would be cool to make the library work in offline mode. Currently if there's no internet connection then modules (datasets or metrics) that have already been loaded in the past can't be loaded and it raises a ConnectionError. This is because `prepare_module` fetches the latest version of the module online. To make it work in offline mode one suggestion was to reload the latest local version of the module. I implemented that and I also raise a warning saying that the module that is loaded is the latest local version. ```python logger.warning( f"Using the latest cached version of the module from {cached_module_path} since it " f"couldn't be found locally at {input_path} or remotely ({error_type_that_prevented_reaching_out_remote_stuff})." ) ``` I added tests to make sure it works as expected and I needed to do a few changes in the code to be able to test things properly. In particular I added a parameter `hf_modules_cache` to `init_dynamic_modules` for testing purposes. It makes it possible to have temporary modules caches for testing. I also added an `offline` context utility that allows testing part of the code by making all the requests fail as if there was no internet. Close #824, close #761.
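A hedged sketch of the user-facing behavior this PR enables; the dataset name is purely illustrative:

```python
from datasets import load_dataset

# With an internet connection: the script and data are downloaded and cached.
squad = load_dataset("squad")

# Later, with no connection: instead of raising ConnectionError, the latest
# cached version of the module is reloaded and a warning like the one quoted
# above is emitted, so the same call keeps working offline.
squad = load_dataset("squad")
```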
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1726/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1726/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1726.diff", "html_url": "https://github.com/huggingface/datasets/pull/1726", "merged_at": "2021-01-19T16:42:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/1726.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1726" }
true
https://api.github.com/repos/huggingface/datasets/issues/1725
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1725/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1725/comments
https://api.github.com/repos/huggingface/datasets/issues/1725/events
https://github.com/huggingface/datasets/issues/1725
784,182,273
MDU6SXNzdWU3ODQxODIyNzM=
1,725
load the local dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/41193842?v=4", "events_url": "https://api.github.com/users/xinjicong/events{/privacy}", "followers_url": "https://api.github.com/users/xinjicong/followers", "following_url": "https://api.github.com/users/xinjicong/following{/other_user}", "gists_url": "https://api.github.com/users/xinjicong/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xinjicong", "id": 41193842, "login": "xinjicong", "node_id": "MDQ6VXNlcjQxMTkzODQy", "organizations_url": "https://api.github.com/users/xinjicong/orgs", "received_events_url": "https://api.github.com/users/xinjicong/received_events", "repos_url": "https://api.github.com/users/xinjicong/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xinjicong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xinjicong/subscriptions", "type": "User", "url": "https://api.github.com/users/xinjicong" }
[]
closed
false
null
[]
null
[ "You should rephrase your question or give more examples and details on what you want to do.\r\n\r\nit’s not possible to understand it and help you with only this information.", "sorry for that.\r\ni want to know how could i load the train set and the test set from the local ,which api or function should i use .\r\n", "Did you try to follow the instructions in the documentation?\r\nHere: https://huggingface.co/docs/datasets/loading_datasets.html#from-local-files", "thanks a lot \r\ni find that the problem is i dont use vpn...\r\nso i have to keep my net work even if i want to load the local data ?", "We will solve this soon (cf #1724)", "thanks a lot", "Hi! `json` is a packaged dataset now, which means its script comes with the library and doesn't require an internet connection." ]
"2021-01-12T12:12:55Z"
"2022-06-01T16:00:59Z"
"2022-06-01T16:00:59Z"
NONE
null
Your guidebook's example is like >>> from datasets import load_dataset >>> dataset = load_dataset('json', data_files='my_file.json') but the first arg is a path... so what should I do if I want to load a local dataset for model training? I would be grateful if you could help me handle this problem! Thanks a lot!
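A minimal sketch following the documentation linked in the comments above; the file names are hypothetical. The first argument selects the generic loader (`'json'`, `'csv'`, `'text'`, ...) and `data_files` points at the local files:

```python
from datasets import load_dataset

# single file -> a DatasetDict with a "train" split
dataset = load_dataset("json", data_files="my_file.json")

# explicit splits (file names are hypothetical)
dataset = load_dataset(
    "json",
    data_files={"train": "train.json", "test": "test.json"},
)
```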
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1725/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1725/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1723
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1723/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1723/comments
https://api.github.com/repos/huggingface/datasets/issues/1723/events
https://github.com/huggingface/datasets/pull/1723
783,982,100
MDExOlB1bGxSZXF1ZXN0NTUzMjQ4MzU1
1,723
ADD S3 support for downloading and uploading processed datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/philschmid", "id": 32632186, "login": "philschmid", "node_id": "MDQ6VXNlcjMyNjMyMTg2", "organizations_url": "https://api.github.com/users/philschmid/orgs", "received_events_url": "https://api.github.com/users/philschmid/received_events", "repos_url": "https://api.github.com/users/philschmid/repos", "site_admin": false, "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "type": "User", "url": "https://api.github.com/users/philschmid" }
[]
closed
false
null
[]
null
[ "I created the documentation for `FileSystem Integration for cloud storage` with loading and saving datasets to/from a filesystem with an example of using `datasets.filesystem.S3Filesystem`. I added a note on the `Saving a processed dataset on disk and reload` saying that it is also possible to use other filesystems and cloud storages such as S3 with a link to the newly created documentation page from me. \r\nI Attach a screenshot of it here. \r\n![screencapture-localhost-5500-docs-build-html-filesystems-html-2021-01-19-17_16_10](https://user-images.githubusercontent.com/32632186/105062131-8d6a5c80-5a7a-11eb-90b0-f6128b758605.png)\r\n" ]
"2021-01-12T07:17:34Z"
"2021-01-26T17:02:08Z"
"2021-01-26T17:02:08Z"
MEMBER
null
# What does this PR do? This PR adds the functionality to load and save `datasets` from and to s3. You can save `datasets` with either `Dataset.save_to_disk()` or `DatasetDict.save_to_disk`. You can load `datasets` with either `load_from_disk` or `Dataset.load_from_disk()`, `DatasetDict.load_from_disk()`. Loading `csv` or `json` datasets from s3 is not implemented. To save/load datasets to s3 you either need to provide an `aws_profile` that is set up on your machine (per default the `default` profile is used), or you have to pass an `aws_access_key_id` and `aws_secret_access_key`. The implementation was done with `fsspec` and `boto3`. ### Example `aws_profile` : <details> ```python dataset.save_to_disk("s3://moto-mock-s3-bucket/datasets/sdk", aws_profile="hf-sm") load_from_disk("s3://moto-mock-s3-bucket/datasets/sdk", aws_profile="hf-sm") ``` </details> ### Example `aws_access_key_id` and `aws_secret_access_key` : <details> ```python dataset.save_to_disk("s3://moto-mock-s3-bucket/datasets/sdk", aws_access_key_id="fake_access_key", aws_secret_access_key="fake_secret_key" ) load_from_disk("s3://moto-mock-s3-bucket/datasets/sdk", aws_access_key_id="fake_access_key", aws_secret_access_key="fake_secret_key" ) ``` </details> If you want to load a dataset from a public s3 bucket you can pass `anon=True` ### Example `anon=True` : <details> ```python dataset.save_to_disk("s3://moto-mock-s3-bucket/datasets/sdk", aws_profile="hf-sm") load_from_disk("s3://moto-mock-s3-bucket/datasets/sdk", anon=True) ``` </details> ### Full Example ```python import datasets dataset = datasets.load_dataset("imdb") print(f"DatasetDict contains {len(dataset)} datasets") print(f"train Dataset has the size of: {len(dataset['train'])}") dataset.save_to_disk("s3://moto-mock-s3-bucket/datasets/sdk", aws_profile="hf-sm") remote_dataset = datasets.load_from_disk("s3://moto-mock-s3-bucket/datasets/sdk", aws_profile="hf-sm") print(f"DatasetDict contains {len(remote_dataset)} datasets") print(f"train Dataset has the size of: {len(remote_dataset['train'])}") ``` Related to #878 I would also adjust the documentation after the code is reviewed, as long as I leave the PR in "draft" status. Something that we can consider is renaming the functions and changing the `_disk` maybe to `_filesystem`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 3, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/1723/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1723/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1723.diff", "html_url": "https://github.com/huggingface/datasets/pull/1723", "merged_at": "2021-01-26T17:02:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/1723.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1723" }
true
https://api.github.com/repos/huggingface/datasets/issues/1724
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1724/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1724/comments
https://api.github.com/repos/huggingface/datasets/issues/1724/events
https://github.com/huggingface/datasets/issues/1724
784,023,338
MDU6SXNzdWU3ODQwMjMzMzg=
1,724
could not run models on a offline server successfully
{ "avatar_url": "https://avatars.githubusercontent.com/u/49967236?v=4", "events_url": "https://api.github.com/users/lkcao/events{/privacy}", "followers_url": "https://api.github.com/users/lkcao/followers", "following_url": "https://api.github.com/users/lkcao/following{/other_user}", "gists_url": "https://api.github.com/users/lkcao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lkcao", "id": 49967236, "login": "lkcao", "node_id": "MDQ6VXNlcjQ5OTY3MjM2", "organizations_url": "https://api.github.com/users/lkcao/orgs", "received_events_url": "https://api.github.com/users/lkcao/received_events", "repos_url": "https://api.github.com/users/lkcao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lkcao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lkcao/subscriptions", "type": "User", "url": "https://api.github.com/users/lkcao" }
[]
closed
false
null
[]
null
[ "Transferred to `datasets` based on the stack trace.", "Hi @lkcao !\r\nYour issue is indeed related to `datasets`. In addition to installing the package manually, you will need to download the `text.py` script on your server. You'll find it (under `datasets/datasets/text`: https://github.com/huggingface/datasets/blob/master/datasets/text/text.py.\r\nThen you can change the line 221 of `run_mlm_new.py` into:\r\n```python\r\n datasets = load_dataset('/path/to/text.py', data_files=data_files)\r\n```\r\nWhere `/path/to/text.py` is the path on the server where you saved the `text.py` script.", "We're working on including the local dataset builders (csv, text, json etc.) directly in the `datasets` package so that they can be used offline", "The local dataset builders (csv, text , json and pandas) are now part of the `datasets` package since #1726 :)\r\nYou can now use them offline\r\n```python\r\ndatasets = load_dataset('text', data_files=data_files)\r\n```\r\n\r\nWe'll do a new release soon", "> The local dataset builders (csv, text , json and pandas) are now part of the `datasets` package since #1726 :)\r\n> You can now use them offline\r\n> \r\n> ```python\r\n> datasets = load_dataset('text', data_files=data_files)\r\n> ```\r\n> \r\n> We'll do a new release soon\r\n\r\nso the new version release now?", "Yes it's been available since datasets 1.3.0 !" ]
"2021-01-12T06:08:06Z"
"2022-10-05T12:39:07Z"
"2022-10-05T12:39:07Z"
NONE
null
Hi, I really need your help with this. I am trying to fine-tune a RoBERTa model on a remote server that strictly bans internet access. I installed all the packages by hand and tried to run run_mlm.py on the server. It works well on Colab, but when I try to run it on this offline server, it shows: ![image](https://user-images.githubusercontent.com/49967236/104276256-25a88600-546a-11eb-9776-8ec695dfa24e.png) Is there anything I can do? Is it possible to download all the things into the cache and upload it to the server? Please help me out...
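A hedged sketch of the workaround from the comments above: copy the `text.py` dataset script to the offline server and point `load_dataset` at that local path. The paths and file names here are hypothetical placeholders.

```python
from datasets import load_dataset

# '/path/to/text.py' is the dataset script downloaded from the datasets
# repository and copied to the offline server by hand.
data_files = {"train": "train.txt", "validation": "valid.txt"}
datasets = load_dataset("/path/to/text.py", data_files=data_files)
```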
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1724/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1724/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1722
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1722/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1722/comments
https://api.github.com/repos/huggingface/datasets/issues/1722/events
https://github.com/huggingface/datasets/pull/1722
783,921,679
MDExOlB1bGxSZXF1ZXN0NTUzMTk3MTg4
1,722
Added unfiltered versions of the Wiki-Auto training data for the GEM simplification task.
{ "avatar_url": "https://avatars.githubusercontent.com/u/11708999?v=4", "events_url": "https://api.github.com/users/mounicam/events{/privacy}", "followers_url": "https://api.github.com/users/mounicam/followers", "following_url": "https://api.github.com/users/mounicam/following{/other_user}", "gists_url": "https://api.github.com/users/mounicam/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mounicam", "id": 11708999, "login": "mounicam", "node_id": "MDQ6VXNlcjExNzA4OTk5", "organizations_url": "https://api.github.com/users/mounicam/orgs", "received_events_url": "https://api.github.com/users/mounicam/received_events", "repos_url": "https://api.github.com/users/mounicam/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mounicam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mounicam/subscriptions", "type": "User", "url": "https://api.github.com/users/mounicam" }
[]
closed
false
null
[]
null
[ "The current version of Wiki-Auto dataset contains a filtered version of the aligned dataset. The commit adds unfiltered versions of the data that can be useful the GEM task participants." ]
"2021-01-12T05:26:04Z"
"2021-01-12T18:14:53Z"
"2021-01-12T17:35:57Z"
CONTRIBUTOR
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1722/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1722/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1722.diff", "html_url": "https://github.com/huggingface/datasets/pull/1722", "merged_at": "2021-01-12T17:35:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/1722.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1722" }
true
https://api.github.com/repos/huggingface/datasets/issues/1721
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1721/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1721/comments
https://api.github.com/repos/huggingface/datasets/issues/1721/events
https://github.com/huggingface/datasets/pull/1721
783,828,428
MDExOlB1bGxSZXF1ZXN0NTUzMTIyODQ5
1,721
[Scientific papers] Mirror datasets zip
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
[ "> Nice !\r\n> \r\n> Could you try to reduce the size of the dummy_data.zip files ? they're quite big (300KB)\r\n\r\nYes, I think it might make sense to enhance the tool a tiny bit to prevent this automatically", "That's the lightest I can make it...it's long-range summarization so a single sample has ~11000 tokens. ", "Ok thanks :)", "Awesome good to merge for me :-) " ]
"2021-01-12T01:15:40Z"
"2021-01-12T11:49:15Z"
"2021-01-12T11:41:47Z"
MEMBER
null
Datasets were uploaded to https://s3.amazonaws.com/datasets.huggingface.co/scientific_papers/1.1.1/arxiv-dataset.zip and https://s3.amazonaws.com/datasets.huggingface.co/scientific_papers/1.1.1/pubmed-dataset.zip respectively, to escape the Google Drive quota and enable faster downloads.
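For reference, a hedged sketch of loading the mirrored data; the two archives correspond to the dataset's two configurations:

```python
from datasets import load_dataset

# each config now downloads from the S3 mirror instead of Google Drive
arxiv = load_dataset("scientific_papers", "arxiv")
pubmed = load_dataset("scientific_papers", "pubmed")
```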
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1721/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1721/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1721.diff", "html_url": "https://github.com/huggingface/datasets/pull/1721", "merged_at": "2021-01-12T11:41:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/1721.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1721" }
true
https://api.github.com/repos/huggingface/datasets/issues/1720
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1720/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1720/comments
https://api.github.com/repos/huggingface/datasets/issues/1720/events
https://github.com/huggingface/datasets/pull/1720
783,721,833
MDExOlB1bGxSZXF1ZXN0NTUzMDM0MzYx
1,720
Adding the NorNE dataset for NER
{ "avatar_url": "https://avatars.githubusercontent.com/u/173537?v=4", "events_url": "https://api.github.com/users/versae/events{/privacy}", "followers_url": "https://api.github.com/users/versae/followers", "following_url": "https://api.github.com/users/versae/following{/other_user}", "gists_url": "https://api.github.com/users/versae/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/versae", "id": 173537, "login": "versae", "node_id": "MDQ6VXNlcjE3MzUzNw==", "organizations_url": "https://api.github.com/users/versae/orgs", "received_events_url": "https://api.github.com/users/versae/received_events", "repos_url": "https://api.github.com/users/versae/repos", "site_admin": false, "starred_url": "https://api.github.com/users/versae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/versae/subscriptions", "type": "User", "url": "https://api.github.com/users/versae" }
[]
closed
false
null
[]
null
[ "Quick question, @lhoestq. In this specific dataset, two special types `GPE_LOC` and `GPE_ORG` can easily be altered depending on the task, choosing either the more general `GPE` tag or the more specific `LOC`/`ORG` tags, conflating them with the other annotations of the same type. However, I have not found an easy way to implement that. Using splits or configs does not seem appropriate.\r\n", "About the `GPE_LOC` and `GPE_ORG`. The original NorNE paper in which they published the dataset, does an evaluation on three different NER tag sets, one considering `GPE_LOC` and `GPE_ORG` as they are, another changing them to be just `GPE`, and another one by changing it to become `LOC` and `ORG`. The called these sets, `norne-full`, `norne-7`, and `norne-9`. What I would like is to provide a way for the user of this dataset to get `norne-7` and `norne-9` without having to duplicate the code.", "Ok I see !\r\nI guess you can have three configurations `norne-full`, `norne-7` and `norne-9`.\r\nEach config can have different feature types. You can simply check for the `self.config.name` in the `_info(self)` method and pick the right ClassLabel names accordingly. And then in `_generate_examples` as well you can check for `self.config.name` to know how to process the labels to yield either GPE_LOC/GPE_ORG, GPE or LOC/ORG", "But I'm already using the configurations for the different language\nvarieties. So you propose having something like `bokmaal`, `bokmaal-7`,\netc? Would there be a different way? If not, I'd be fine the corpus as it\nis until we come up with a solution. Thanks in any case.\n\n--\nSent using a cell-phone, so sorry for the typos and wrong auto-corrections.\n\nOn Tue, Jan 19, 2021, 4:56 PM Quentin Lhoest <notifications@github.com>\nwrote:\n\n> Ok I see !\n> I guess you can have three configurations norne-full, norne-7 and norne-9.\n> Each config can have different feature types. You can simply check for the\n> self.config.name in the _info(self) method and pick the right ClassLabel\n> names accordingly. And then in _generate_examples as well you can check\n> for self.config.name to know how to process the labels to yield either\n> GPE_LOC/GPE_ORG, GPE or LOC/ORG\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/1720#issuecomment-762936612>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AABKLYOWNDBD76WZPJHFCWLS2WTTHANCNFSM4V6GSUQA>\n> .\n>\n", "The first option about having configurations like `bokmaal-7`, `bokmaal-9` etc. would definitely work.\r\n\r\nA second option would be to add a parameter `ner_tags_set` to `NorneConfig` and then one could load them with\r\n```python\r\nbokmaal_full = load_dataset(\"norne\", \"bokmaal\", ner_tags_set=\"norne-full\")\r\n```\r\nfor example.\r\n\r\nWhat do you think ?", "Hi @versae have you had a chance to consider one of the two options for the config ?\r\nI think both are ok but I have a small preference for the first one since it's simpler to implement.\r\n\r\nFeel free to ping me if you have questions or if I can help :) ", "Hi @lhoestq. Agree, option 1 seems easier to implement. Just haven't had bandwidth to get to it yet. Hopefully starting next week I'll be able to update the PR.", "Hi @versae ! Did you manage to add the configurations ? Let me know if we can help you on this", "Hi @lhoestq, I do actually have to code ready, just need to generate the dummy data for it. 
", "One thing I don't know how to do is to make `_info(self)` return the different NER tags in its `DatasetInfo` object depending on the specific config.", "OK, I think it's ready now.", "Closing this one and opening a new one with a cleaner commit log.", "All set now in #2154." ]
"2021-01-11T21:34:13Z"
"2021-03-31T14:23:49Z"
"2021-03-31T14:13:17Z"
CONTRIBUTOR
null
NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both of the official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, organizations, locations, geo-political entities, products, and events, in addition to a class corresponding to nominals derived from names.
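A hedged sketch of the config-based interface discussed in the comments above (option 1). The config names are the ones proposed in the thread and may differ in the merged version:

```python
from datasets import load_dataset

# language variety only, full NER tag set
bokmaal_full = load_dataset("norne", "bokmaal")

# reduced tag sets from the NorNE paper, as proposed in the thread
bokmaal_7 = load_dataset("norne", "bokmaal-7")  # GPE_LOC/GPE_ORG collapsed to GPE
bokmaal_9 = load_dataset("norne", "bokmaal-9")  # GPE_LOC/GPE_ORG mapped to LOC/ORG
```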
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1720/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1720/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1720.diff", "html_url": "https://github.com/huggingface/datasets/pull/1720", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1720.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1720" }
true
https://api.github.com/repos/huggingface/datasets/issues/1719
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1719/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1719/comments
https://api.github.com/repos/huggingface/datasets/issues/1719/events
https://github.com/huggingface/datasets/pull/1719
783,557,542
MDExOlB1bGxSZXF1ZXN0NTUyODk3MzY4
1,719
Fix column list comparison in transmit format
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2021-01-11T17:23:56Z"
"2021-01-11T18:45:03Z"
"2021-01-11T18:45:02Z"
MEMBER
null
As noticed in #1718 the cache might not reload the cache files when new columns were added. This is due to an issue in `transmit_format`: the column list comparison fails because the order was not deterministic. This causes `transmit_format` to apply an unnecessary `set_format` transform with shuffled column names. I fixed that by sorting the columns for the comparison and added a test. To properly test that I added a third column `col_3` to the dummy_dataset used for tests.
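A small sketch of the reasoning behind the fix: converting a `set` of column names to a `list` does not guarantee an order, so the hash the cache computes over it could differ between runs; sorting first makes the comparison deterministic.

```python
columns = {"col_1", "col_2", "col_3"}

# The order of list(columns) is not guaranteed to be stable across runs
# (string hashing is randomized), so a direct list comparison, and any
# fingerprint hashed from it, can differ spuriously.
# Sorting first makes the comparison deterministic:
assert sorted(columns) == ["col_1", "col_2", "col_3"]
```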
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1719/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1719/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1719.diff", "html_url": "https://github.com/huggingface/datasets/pull/1719", "merged_at": "2021-01-11T18:45:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/1719.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1719" }
true
https://api.github.com/repos/huggingface/datasets/issues/1718
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1718/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1718/comments
https://api.github.com/repos/huggingface/datasets/issues/1718/events
https://github.com/huggingface/datasets/issues/1718
783,474,753
MDU6SXNzdWU3ODM0NzQ3NTM=
1,718
Possible cache miss in datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/18296312?v=4", "events_url": "https://api.github.com/users/ofirzaf/events{/privacy}", "followers_url": "https://api.github.com/users/ofirzaf/followers", "following_url": "https://api.github.com/users/ofirzaf/following{/other_user}", "gists_url": "https://api.github.com/users/ofirzaf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ofirzaf", "id": 18296312, "login": "ofirzaf", "node_id": "MDQ6VXNlcjE4Mjk2MzEy", "organizations_url": "https://api.github.com/users/ofirzaf/orgs", "received_events_url": "https://api.github.com/users/ofirzaf/received_events", "repos_url": "https://api.github.com/users/ofirzaf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ofirzaf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ofirzaf/subscriptions", "type": "User", "url": "https://api.github.com/users/ofirzaf" }
[]
closed
false
null
[]
null
[ "Thanks for reporting !\r\nI was able to reproduce thanks to your code and find the origin of the bug.\r\nThe cache was not reusing the same file because one object was not deterministic. It comes from a conversion from `set` to `list` in the `datasets.arrrow_dataset.transmit_format` function, where the resulting list would not always be in the same order and therefore the function that computes the hash used by the cache would not always return the same result.\r\nI'm opening a PR to fix this.\r\n\r\nAlso we plan to do a new release in the coming days so you can expect the fix to be available soon.\r\nNote that you can still specify `cache_file_name=` in the second `map()` call to name the cache file yourself if you want to.", "Thanks for the fast reply, waiting for the fix :)\r\n\r\nI tried to use `cache_file_names` and wasn't sure how, I tried to give it the following:\r\n```\r\ntokenized_datasets = tokenized_datasets.map(\r\n group_texts,\r\n batched=True,\r\n num_proc=60,\r\n load_from_cache_file=True,\r\n cache_file_names={k: f'.cache/{str(k)}' for k in tokenized_datasets}\r\n)\r\n```\r\n\r\nand got an error:\r\n```\r\nmultiprocess.pool.RemoteTraceback:\r\n\"\"\"\r\nTraceback (most recent call last):\r\n File \"/venv/lib/python3.6/site-packages/multiprocess/pool.py\", line 119, in worker\r\n result = (True, func(*args, **kwds))\r\n File \"/venv/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 157, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/venv/lib/python3.6/site-packages/datasets/fingerprint.py\", line 163, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"/venv/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1491, in _map_single\r\n tmp_file = tempfile.NamedTemporaryFile(\"wb\", dir=os.path.dirname(cache_file_name), delete=False)\r\n File \"/usr/lib/python3.6/tempfile.py\", line 690, in NamedTemporaryFile\r\n (fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type)\r\n File \"/usr/lib/python3.6/tempfile.py\", line 401, in _mkstemp_inner\r\n fd = _os.open(file, flags, 0o600)\r\nFileNotFoundError: [Errno 2] No such file or directory: '_00000_of_00060.cache/tmpsvszxtop'\r\n\"\"\"\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"test.py\", line 48, in <module>\r\n cache_file_names={k: f'.cache/{str(k)}' for k in tokenized_datasets}\r\n File \"/venv/lib/python3.6/site-packages/datasets/dataset_dict.py\", line 303, in map\r\n for k, dataset in self.items()\r\n File \"/venv/lib/python3.6/site-packages/datasets/dataset_dict.py\", line 303, in <dictcomp>\r\n for k, dataset in self.items()\r\n File \"/venv/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1317, in map\r\n transformed_shards = [r.get() for r in results]\r\n File \"/venv/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1317, in <listcomp>\r\n transformed_shards = [r.get() for r in results]\r\n File \"/venv/lib/python3.6/site-packages/multiprocess/pool.py\", line 644, in get\r\n raise self._value\r\nFileNotFoundError: [Errno 2] No such file or directory: '_00000_of_00060.cache/tmpsvszxtop'\r\n```\r\n", "The documentation says\r\n```\r\ncache_file_names (`Optional[Dict[str, str]]`, defaults to `None`): Provide the name of a cache file to use to store the\r\n results of the computation instead of the automatically generated cache file name.\r\n You have to provide one :obj:`cache_file_name` per dataset in the dataset 
dictionary.\r\n```\r\nWhat is expected is simply the name of a file, not a path. The file will be located in the cache directory of the `wikitext` dataset. You can try again with something like\r\n```python\r\ncache_file_names = {k: f'tokenized_and_grouped_{str(k)}' for k in tokenized_datasets}\r\n```", "Managed to get `cache_file_names` working and caching works well with it\r\nHad to make a small modification for it to work:\r\n```\r\ncache_file_names = {k: f'tokenized_and_grouped_{str(k)}.arrow' for k in tokenized_datasets}\r\n```", "Another comment on `cache_file_names`, it doesn't save the produced cached files in the dataset's cache folder, it requires to give a path to an existing directory for it to work.\r\nI can confirm that this is how it works in `datasets==1.1.3`", "Oh yes indeed ! Maybe we need to update the docstring to mention that it is a path", "I fixed the docstring. Hopefully this is less confusing now: https://github.com/huggingface/datasets/commit/42ccc0012ba8864e6db1392430100f350236183a", "I upgraded to the latest version and I encountered some strange behaviour, the script I posted in the OP doesn't trigger recalculation, however, if I add the following change it does trigger partial recalculation, I am not sure if its something wrong on my machine or a bug:\r\n```\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoTokenizer\r\n\r\ndatasets = load_dataset('wikitext', 'wikitext-103-raw-v1')\r\ntokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', use_fast=True)\r\n\r\ncolumn_names = datasets[\"train\"].column_names\r\ntext_column_name = \"text\" if \"text\" in column_names else column_names[0]\r\ndef tokenize_function(examples):\r\n return tokenizer(examples[text_column_name], return_special_tokens_mask=True)\r\n# CHANGE\r\nprint('hello')\r\n# CHANGE\r\n\r\ntokenized_datasets = datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n...\r\n```\r\nI am using datasets in the `run_mlm.py` script in the transformers examples and I found that if I change the script without touching any of the preprocessing. it still triggers recalculation which is very weird\r\n\r\nEdit: accidently clicked the close issue button ", "This is because the `group_texts` line definition changes (it is defined 3 lines later than in the previous call). Currently if a function is moved elsewhere in a script we consider it to be different.\r\n\r\nNot sure this is actually a good idea to keep this behavior though. We had this as a security in the early development of the lib but now the recursive hashing of objects is robust so we can probably remove that.\r\nMoreover we're already ignoring the line definition for lambda functions.", "I opened a PR to change this, let me know what you think.", "Sounds great, thank you for your quick responses and help! Looking forward for the next release.", "I am having a similar issue where only the grouped files are loaded from cache while the tokenized ones aren't. I can confirm both datasets are being stored to file, but only the grouped version is loaded from cache. Not sure what might be going on. But I've tried to remove all kinds of non deterministic behaviour, but still no luck. 
Thanks for the help!\r\n\r\n\r\n```python\r\n # Datasets\r\n train = sorted(glob(args.data_dir + '*.{}'.format(args.ext)))\r\n if args.dev_split >= len(train):\r\n raise ValueError(\"Not enough dev files\")\r\n dev = []\r\n state = random.Random(1001)\r\n for _ in range(args.dev_split):\r\n dev.append(train.pop(state.randint(0, len(train) - 1)))\r\n\r\n max_seq_length = min(args.max_seq_length, tokenizer.model_max_length)\r\n\r\n def tokenize_function(examples):\r\n return tokenizer(examples['text'], return_special_tokens_mask=True)\r\n\r\n def group_texts(examples):\r\n # Concatenate all texts from our dataset and generate chunks of max_seq_length\r\n concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}\r\n total_length = len(concatenated_examples[list(examples.keys())[0]])\r\n # Truncate (not implementing padding)\r\n total_length = (total_length // max_seq_length) * max_seq_length\r\n # Split by chunks of max_seq_length\r\n result = {\r\n k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]\r\n for k, t in concatenated_examples.items()\r\n }\r\n return result\r\n\r\n datasets = load_dataset(\r\n 'text', name='DBNL', data_files={'train': train[:10], 'dev': dev[:5]}, \r\n cache_dir=args.data_cache_dir)\r\n datasets = datasets.map(tokenize_function, \r\n batched=True, remove_columns=['text'], \r\n cache_file_names={k: os.path.join(args.data_cache_dir, f'{k}-tokenized') for k in datasets},\r\n load_from_cache_file=not args.overwrite_cache)\r\n datasets = datasets.map(group_texts, \r\n batched=True,\r\n cache_file_names={k: os.path.join(args.data_cache_dir, f'{k}-grouped') for k in datasets},\r\n load_from_cache_file=not args.overwrite_cache)\r\n```\r\n\r\nAnd this is the log\r\n\r\n```\r\n04/26/2021 10:26:59 - WARNING - datasets.builder - Using custom data configuration DBNL-f8d988ad33ccf2c1\r\n04/26/2021 10:26:59 - WARNING - datasets.builder - Reusing dataset text (/home/manjavacasema/data/.cache/text/DBNL-f8d988ad33ccf2c1/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5)\r\n100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13/13 [00:00<00:00, 21.07ba/s]\r\n100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 40/40 [00:01<00:00, 24.28ba/s]\r\n04/26/2021 10:27:01 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/manjavacasema/data/.cache/train-grouped\r\n04/26/2021 10:27:01 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/manjavacasema/data/.cache/dev-grouped\r\n```\r\n", "Hi ! What tokenizer are you using ?", "It's the ByteLevelBPETokenizer", "This error happened to me too, when I tried to supply my own fingerprint to `map()` via the `new_fingerprint` arg.\r\n\r\nEdit: realized it was because my path was weird and had colons and brackets and slashes in it, since one of the variable values I included in the fingerprint was a dataset split like \"train[:10%]\". I fixed it with [this solution](https://stackoverflow.com/a/13593932/2287177) from StackOverflow to just remove those invalid characters from the fingerprint.", "Good catch @jxmorris12, maybe we should do additional checks on the valid characters for fingerprints ! 
Would you like to contribute this ?\r\n\r\nI think this can be added here, when we set the fingerprint(s) that are passed `map`:\r\n\r\nhttps://github.com/huggingface/datasets/blob/25bb7c9cbf519fbbf9abf3898083b529e7762705/src/datasets/fingerprint.py#L449-L454\r\n\r\nmaybe something like\r\n```python\r\nif kwargs.get(fingerprint_name) is None:\r\n ...\r\nelse:\r\n # In this case, it's the user who specified the fingerprint manually:\r\n # we need to make sure it's a valid hash\r\n validate_fingerprint(kwargs[fingerprint_name])\r\n```\r\n\r\nOtherwise I can open a PR later", "I opened a PR here to add the fingerprint validation: https://github.com/huggingface/datasets/pull/4587\r\n\r\nEDIT: merged :)", "thank you!" ]
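A minimal sketch of the fingerprint-validation idea discussed above, assuming a hypothetical helper name `sanitize_fingerprint`; `new_fingerprint` is the real `Dataset.map` parameter, while the dataset and function names are illustrative.

```python
import re

def sanitize_fingerprint(raw: str) -> str:
    # Hypothetical helper: the fingerprint names a cache file on disk, so keep
    # only characters that are safe in file names (e.g. strip "train[:10%]" syntax).
    return re.sub(r"[^0-9a-zA-Z_]", "", raw)

fingerprint = sanitize_fingerprint("tokenized-train[:10%]")  # -> "tokenizedtrain10"
dataset = dataset.map(tokenize_function, batched=True, new_fingerprint=fingerprint)
```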
"2021-01-11T15:37:31Z"
"2022-06-29T14:54:42Z"
"2021-01-26T02:47:59Z"
NONE
null
Hi, I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using the cache. I have attached an example script that for me reproduces the problem. In the attached example the second map function always recomputes instead of loading from cache. Is this a bug or am I doing something wrong? Is there a way to fix this and avoid all the recomputation? Thanks

Edit: transformers==3.5.1 datasets==1.2.0

```python
from datasets import load_dataset
from transformers import AutoTokenizer

datasets = load_dataset('wikitext', 'wikitext-103-raw-v1')
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', use_fast=True)

column_names = datasets["train"].column_names
text_column_name = "text" if "text" in column_names else column_names[0]


def tokenize_function(examples):
    return tokenizer(examples[text_column_name], return_special_tokens_mask=True)


tokenized_datasets = datasets.map(
    tokenize_function,
    batched=True,
    num_proc=60,
    remove_columns=[text_column_name],
    load_from_cache_file=True,
)

max_seq_length = tokenizer.model_max_length


def group_texts(examples):
    # Concatenate all texts.
    concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # We drop the small remainder; we could add padding if the model supported it
    # instead of this drop. You can customize this part to your needs.
    total_length = (total_length // max_seq_length) * max_seq_length
    # Split by chunks of max_len.
    result = {
        k: [t[i: i + max_seq_length] for i in range(0, total_length, max_seq_length)]
        for k, t in concatenated_examples.items()
    }
    return result


tokenized_datasets = tokenized_datasets.map(
    group_texts,
    batched=True,
    num_proc=60,
    load_from_cache_file=True,
)

print(tokenized_datasets)
print('finished')
```
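A minimal sketch of the workaround discussed in the comments above for this issue, continuing the script in the body and assuming a `DatasetDict`: pinning the cache file names makes the reload independent of the function hash. The directory and file names are illustrative, and depending on the `datasets` version the names may need an `.arrow` suffix or a full path to an existing directory.

```python
import os

cache_dir = "./map_cache"  # assumed location
os.makedirs(cache_dir, exist_ok=True)

tokenized_datasets = tokenized_datasets.map(
    group_texts,
    batched=True,
    load_from_cache_file=True,
    cache_file_names={k: os.path.join(cache_dir, f"grouped_{k}.arrow") for k in tokenized_datasets},
)
```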
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/1718/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1718/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1717
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1717/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1717/comments
https://api.github.com/repos/huggingface/datasets/issues/1717/events
https://github.com/huggingface/datasets/issues/1717
783,074,255
MDU6SXNzdWU3ODMwNzQyNTU=
1,717
SciFact dataset - minor changes
{ "avatar_url": "https://avatars.githubusercontent.com/u/3091916?v=4", "events_url": "https://api.github.com/users/dwadden/events{/privacy}", "followers_url": "https://api.github.com/users/dwadden/followers", "following_url": "https://api.github.com/users/dwadden/following{/other_user}", "gists_url": "https://api.github.com/users/dwadden/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dwadden", "id": 3091916, "login": "dwadden", "node_id": "MDQ6VXNlcjMwOTE5MTY=", "organizations_url": "https://api.github.com/users/dwadden/orgs", "received_events_url": "https://api.github.com/users/dwadden/received_events", "repos_url": "https://api.github.com/users/dwadden/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dwadden/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dwadden/subscriptions", "type": "User", "url": "https://api.github.com/users/dwadden" }
[]
closed
false
null
[]
null
[ "Hi Dave,\r\nYou are more than welcome to open a PR to make these changes! 🤗\r\nYou will find the relevant information about opening a PR in the [contributing guide](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md) and in the [dataset addition guide](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).\r\n\r\nPinging also @lhoestq for the Google cloud matter.", "> I'd like to make a few minor changes, including the citation information and the `_URL` from which to download the dataset. Can I submit a PR for this?\r\n\r\nSure ! Also feel free to ping us for reviews or if we can help :)\r\n\r\n> It also looks like the dataset is being downloaded directly from Huggingface's Google cloud account rather than via the `_URL` in [scifact.py](https://github.com/huggingface/datasets/blob/master/datasets/scifact/scifact.py). Can you help me update the version on gcloud?\r\n\r\nWhat makes you think that ?\r\nAfaik there's no scifact on our google storage\r\n", "\r\n\r\n> > I'd like to make a few minor changes, including the citation information and the `_URL` from which to download the dataset. Can I submit a PR for this?\r\n> \r\n> Sure ! Also feel free to ping us for reviews or if we can help :)\r\n> \r\nOK! We're organizing a [shared task](https://sdproc.org/2021/sharedtasks.html#sciver) based on the dataset, and I made some updates and changed the download URL - so the current code points to a dead URL. I'll update appropriately once the task is finalized and make a PR.\r\n\r\n> > It also looks like the dataset is being downloaded directly from Huggingface's Google cloud account rather than via the `_URL` in [scifact.py](https://github.com/huggingface/datasets/blob/master/datasets/scifact/scifact.py). Can you help me update the version on gcloud?\r\n> \r\n> What makes you think that ?\r\n> Afaik there's no scifact on our google storage\r\n\r\nYou're right, I had the data cached on my machine somewhere. \r\n\r\n", "I opened a PR about this: https://github.com/huggingface/datasets/pull/1780. Closing this issue, will continue there." ]
"2021-01-11T05:26:40Z"
"2021-01-26T02:52:17Z"
"2021-01-26T02:52:17Z"
CONTRIBUTOR
null
Hi, SciFact dataset creator here. First of all, thanks for adding the dataset to Huggingface, much appreciated! I'd like to make a few minor changes, including the citation information and the `_URL` from which to download the dataset. Can I submit a PR for this? It also looks like the dataset is being downloaded directly from Huggingface's Google cloud account rather than via the `_URL` in [scifact.py](https://github.com/huggingface/datasets/blob/master/datasets/scifact/scifact.py). Can you help me update the version on gcloud? Thanks, Dave
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1717/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1717/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1716
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1716/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1716/comments
https://api.github.com/repos/huggingface/datasets/issues/1716/events
https://github.com/huggingface/datasets/pull/1716
782,819,006
MDExOlB1bGxSZXF1ZXN0NTUyMjgzNzE5
1,716
Add Hatexplain Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/48222101?v=4", "events_url": "https://api.github.com/users/kushal2000/events{/privacy}", "followers_url": "https://api.github.com/users/kushal2000/followers", "following_url": "https://api.github.com/users/kushal2000/following{/other_user}", "gists_url": "https://api.github.com/users/kushal2000/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kushal2000", "id": 48222101, "login": "kushal2000", "node_id": "MDQ6VXNlcjQ4MjIyMTAx", "organizations_url": "https://api.github.com/users/kushal2000/orgs", "received_events_url": "https://api.github.com/users/kushal2000/received_events", "repos_url": "https://api.github.com/users/kushal2000/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kushal2000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kushal2000/subscriptions", "type": "User", "url": "https://api.github.com/users/kushal2000" }
[]
closed
false
null
[]
null
[]
"2021-01-10T13:30:01Z"
"2021-01-18T14:21:42Z"
"2021-01-18T14:21:42Z"
CONTRIBUTOR
null
Adding Hatexplain - the first benchmark hate speech dataset covering multiple aspects of the issue
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1716/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1716/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1716.diff", "html_url": "https://github.com/huggingface/datasets/pull/1716", "merged_at": "2021-01-18T14:21:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/1716.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1716" }
true
https://api.github.com/repos/huggingface/datasets/issues/1715
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1715/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1715/comments
https://api.github.com/repos/huggingface/datasets/issues/1715/events
https://github.com/huggingface/datasets/pull/1715
782,754,441
MDExOlB1bGxSZXF1ZXN0NTUyMjM2NDA5
1,715
add Korean intonation-aided intention identification dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" }
[]
closed
false
null
[]
null
[]
"2021-01-10T06:29:04Z"
"2021-09-17T16:54:13Z"
"2021-01-12T17:14:33Z"
MEMBER
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1715/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1715/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1715.diff", "html_url": "https://github.com/huggingface/datasets/pull/1715", "merged_at": "2021-01-12T17:14:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/1715.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1715" }
true
https://api.github.com/repos/huggingface/datasets/issues/1714
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1714/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1714/comments
https://api.github.com/repos/huggingface/datasets/issues/1714/events
https://github.com/huggingface/datasets/pull/1714
782,416,276
MDExOlB1bGxSZXF1ZXN0NTUxOTc3MDA0
1,714
Adding adversarialQA dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/15869827?v=4", "events_url": "https://api.github.com/users/maxbartolo/events{/privacy}", "followers_url": "https://api.github.com/users/maxbartolo/followers", "following_url": "https://api.github.com/users/maxbartolo/following{/other_user}", "gists_url": "https://api.github.com/users/maxbartolo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/maxbartolo", "id": 15869827, "login": "maxbartolo", "node_id": "MDQ6VXNlcjE1ODY5ODI3", "organizations_url": "https://api.github.com/users/maxbartolo/orgs", "received_events_url": "https://api.github.com/users/maxbartolo/received_events", "repos_url": "https://api.github.com/users/maxbartolo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/maxbartolo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maxbartolo/subscriptions", "type": "User", "url": "https://api.github.com/users/maxbartolo" }
[]
closed
false
null
[]
null
[ "Oh that's a really cool one, we'll review/merge it soon!\r\n\r\nIn the meantime, do you have any specific positive/negative feedback on the process of adding a datasets Max?\r\nDid you follow the instruction in the [detailed step-by-step](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md)?", "Thanks Thom, been a while, hope all is well!\r\n\r\nYes, I followed the step by step instructions and found them pretty straightforward. The only things I wasn't sure of were what should go into the YAML tags field for the dataset card, and whether there was a list of options somewhere (maybe akin to the metrics?) of the possible supported tasks. I found the rest very intuitive and the automated metadata and dummy data generation very handy. Thanks!", "Good point! pinging @yjernite here so he can improve this part!", "@maxbartolo cool addition!\r\n\r\nFor the YAML tag, you should use the tagging app we provide to choose from a drop-down menu:\r\nhttps://github.com/huggingface/datasets-tagging\r\n\r\nThe process is described toward the end of the [step-by-step guide](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card), do you have any suggestions for making it easier to find?\r\n\r\nOtherwise, the dataset card is really cool, thanks for making it so complete!\r\n", "@yjernite\r\n\r\nThanks, YAML tags added. I think my main issue was with the flow of the [step-by-step guide](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). For example, the [card creator](https://huggingface.co/datasets/card-creator/) is introduced in Step 4, right after creating an empty directory for your dataset. The first field it requires are the YAML tags, which (at least for me) was the last step of the process.\r\n\r\nI'd suggest having the guide structured in the same order as the creation process. For me it was something like:\r\n- Step 1: Preparing your env\r\n- Step 2: Write the loading/processing code\r\n- Step 3: Automatically generate dummy data and `dataset_infos.json`\r\n- Step 4: Tag the dataset\r\n- Step 5: Write the dataset card using the [card creator](https://huggingface.co/datasets/card-creator/)\r\n- Step 6: Open a Pull Request on the main HuggingFace repo and share your work!!\r\n\r\nThanks again!" ]
"2021-01-08T21:46:09Z"
"2021-01-13T16:05:24Z"
"2021-01-13T16:05:24Z"
CONTRIBUTOR
null
Adding the adversarialQA dataset (https://adversarialqa.github.io/) from Beat the AI (https://arxiv.org/abs/2002.00293)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1714/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1714/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1714.diff", "html_url": "https://github.com/huggingface/datasets/pull/1714", "merged_at": "2021-01-13T16:05:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/1714.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1714" }
true
https://api.github.com/repos/huggingface/datasets/issues/1713
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1713/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1713/comments
https://api.github.com/repos/huggingface/datasets/issues/1713/events
https://github.com/huggingface/datasets/issues/1713
782,337,723
MDU6SXNzdWU3ODIzMzc3MjM=
1,713
Installation using conda
{ "avatar_url": "https://avatars.githubusercontent.com/u/9393002?v=4", "events_url": "https://api.github.com/users/pranav-s/events{/privacy}", "followers_url": "https://api.github.com/users/pranav-s/followers", "following_url": "https://api.github.com/users/pranav-s/following{/other_user}", "gists_url": "https://api.github.com/users/pranav-s/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pranav-s", "id": 9393002, "login": "pranav-s", "node_id": "MDQ6VXNlcjkzOTMwMDI=", "organizations_url": "https://api.github.com/users/pranav-s/orgs", "received_events_url": "https://api.github.com/users/pranav-s/received_events", "repos_url": "https://api.github.com/users/pranav-s/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pranav-s/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pranav-s/subscriptions", "type": "User", "url": "https://api.github.com/users/pranav-s" }
[]
closed
false
null
[]
null
[ "Yes indeed the idea is to have the next release on conda cc @LysandreJik ", "Great! Did you guys have a timeframe in mind for the next release?\r\n\r\nThank you for all the great work in developing this library.", "I think we can have `datasets` on conda by next week. Will see what I can do!", "Thank you. Looking forward to it.", "`datasets` has been added to the huggingface channel thanks to @LysandreJik :)\r\nIt depends on conda-forge though\r\n\r\n```\r\nconda install -c huggingface -c conda-forge datasets\r\n```" ]
"2021-01-08T19:12:15Z"
"2021-09-17T12:47:40Z"
"2021-09-17T12:47:40Z"
NONE
null
Will a conda package for installing datasets be added to the huggingface conda channel? I have installed transformers using conda and would like to use the datasets library to use some of the scripts in the transformers/examples folder, but am unable to do so at the moment, as datasets can only be installed using pip, and using pip in a conda environment is generally a bad idea in my experience.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1713/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1713/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1712
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1712/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1712/comments
https://api.github.com/repos/huggingface/datasets/issues/1712/events
https://github.com/huggingface/datasets/pull/1712
782,313,097
MDExOlB1bGxSZXF1ZXN0NTUxODkxMDk4
1,712
Silicone
{ "avatar_url": "https://avatars.githubusercontent.com/u/1551356?v=4", "events_url": "https://api.github.com/users/eusip/events{/privacy}", "followers_url": "https://api.github.com/users/eusip/followers", "following_url": "https://api.github.com/users/eusip/following{/other_user}", "gists_url": "https://api.github.com/users/eusip/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eusip", "id": 1551356, "login": "eusip", "node_id": "MDQ6VXNlcjE1NTEzNTY=", "organizations_url": "https://api.github.com/users/eusip/orgs", "received_events_url": "https://api.github.com/users/eusip/received_events", "repos_url": "https://api.github.com/users/eusip/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eusip/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eusip/subscriptions", "type": "User", "url": "https://api.github.com/users/eusip" }
[]
closed
false
null
[]
null
[ "When should we expect to see our dataset appear in the search dropdown at huggingface.co?", "Hi @eusip,\r\n\r\n> When should we expect to see our dataset appear in the search dropdown at huggingface.co?\r\n\r\nwhen this PR is merged.", "Thanks!", "I've implemented all the changes requested by @lhoestq but I made the mistake of trying to change the remote branch name. \r\n\r\nHopefully the changes are seen on your end as both branches `silicone` and `main` should be up-to-date.", "It looks like the PR includes changes about many other files than the ones for Silicone (+30,000 line changes)\r\n\r\nMaybe you can try to create another branch and another PR ?", "> It looks like the PR includes changes about many other files than the ones for Silicone (+30,000 line changes)\r\n> \r\n> Maybe you can try to create another branch and another PR ?\r\n\r\nSure. I will make a new pull request." ]
"2021-01-08T18:24:18Z"
"2021-01-21T14:12:37Z"
"2021-01-21T10:31:11Z"
CONTRIBUTOR
null
My collaborators and I within the Affective Computing team at Telecom Paris would like to push our spoken dialogue dataset for publication.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1712/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1712/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1712.diff", "html_url": "https://github.com/huggingface/datasets/pull/1712", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1712.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1712" }
true
https://api.github.com/repos/huggingface/datasets/issues/1711
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1711/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1711/comments
https://api.github.com/repos/huggingface/datasets/issues/1711/events
https://github.com/huggingface/datasets/pull/1711
782,129,083
MDExOlB1bGxSZXF1ZXN0NTUxNzQxODA2
1,711
Fix windows path scheme in cached path
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2021-01-08T13:45:56Z"
"2021-01-11T09:23:20Z"
"2021-01-11T09:23:19Z"
MEMBER
null
As noticed in #807, there's currently an issue with `cached_path` not raising `FileNotFoundError` on Windows for absolute paths. This is due to the way we check whether a path is local or not: the check on the scheme using urlparse was incomplete. I fixed this and added tests.
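A small illustration of the failure mode described above, not the PR's actual code: on Windows, `urlparse` reports the drive letter of an absolute path as a URL scheme, so a check that treats any non-empty scheme as remote misclassifies local paths.

```python
from urllib.parse import urlparse

print(urlparse(r"C:\Users\me\data.txt").scheme)      # "c" (looks like a URL scheme)
print(urlparse("https://example.com/x.txt").scheme)  # "https"

def is_remote_url(url_or_filename: str) -> bool:
    # Illustrative allow-list check instead of `scheme != ""`.
    return urlparse(url_or_filename).scheme in ("http", "https", "s3", "ftp")
```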
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1711/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1711/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1711.diff", "html_url": "https://github.com/huggingface/datasets/pull/1711", "merged_at": "2021-01-11T09:23:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/1711.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1711" }
true
https://api.github.com/repos/huggingface/datasets/issues/1710
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1710/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1710/comments
https://api.github.com/repos/huggingface/datasets/issues/1710/events
https://github.com/huggingface/datasets/issues/1710
781,914,951
MDU6SXNzdWU3ODE5MTQ5NTE=
1,710
IsADirectoryError when trying to download C4
{ "avatar_url": "https://avatars.githubusercontent.com/u/5771366?v=4", "events_url": "https://api.github.com/users/fredriko/events{/privacy}", "followers_url": "https://api.github.com/users/fredriko/followers", "following_url": "https://api.github.com/users/fredriko/following{/other_user}", "gists_url": "https://api.github.com/users/fredriko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fredriko", "id": 5771366, "login": "fredriko", "node_id": "MDQ6VXNlcjU3NzEzNjY=", "organizations_url": "https://api.github.com/users/fredriko/orgs", "received_events_url": "https://api.github.com/users/fredriko/received_events", "repos_url": "https://api.github.com/users/fredriko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fredriko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fredriko/subscriptions", "type": "User", "url": "https://api.github.com/users/fredriko" }
[]
closed
false
null
[]
null
[ "I haven't tested C4 on my side so there so there may be a few bugs in the code/adjustments to make.\r\nHere it looks like in c4.py, line 190 one of the `files_to_download` is `'/'` which is invalid.\r\nValid files are paths to local files or URLs to remote files.", "Fixed once processed data is used instead:\r\n- #2575" ]
"2021-01-08T07:31:30Z"
"2022-08-04T11:56:10Z"
"2022-08-04T11:55:04Z"
NONE
null
**TLDR**: I fail to download C4 and see a stacktrace originating in `IsADirectoryError` as an explanation for failure. How can the problem be fixed?

**VERBOSE**: I use Python version 3.7 and have the following dependencies listed in my project:

```
datasets==1.2.0
apache-beam==2.26.0
```

When running the following code, where `/data/huggingface/unpacked/` contains a single unzipped `wet.paths` file manually downloaded as per the instructions for C4:

```
from datasets import load_dataset
load_dataset("c4", "en", data_dir="/data/huggingface/unpacked", beam_runner='DirectRunner')
```

I get the following stacktrace:

```
/Users/fredriko/venv/misc/bin/python /Users/fredriko/source/misc/main.py
Downloading and preparing dataset c4/en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /Users/fredriko/.cache/huggingface/datasets/c4/en/2.3.0/8304cf264cc42bdebcb13fca4b9cb36368a96f557d36f9dc969bebbe2568b283...
Traceback (most recent call last):
  File "/Users/fredriko/source/misc/main.py", line 3, in <module>
    load_dataset("c4", "en", data_dir="/data/huggingface/unpacked", beam_runner='DirectRunner')
  File "/Users/fredriko/venv/misc/lib/python3.7/site-packages/datasets/load.py", line 612, in load_dataset
    ignore_verifications=ignore_verifications,
  File "/Users/fredriko/venv/misc/lib/python3.7/site-packages/datasets/builder.py", line 527, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/Users/fredriko/venv/misc/lib/python3.7/site-packages/datasets/builder.py", line 1066, in _download_and_prepare
    pipeline=pipeline,
  File "/Users/fredriko/venv/misc/lib/python3.7/site-packages/datasets/builder.py", line 582, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/Users/fredriko/.cache/huggingface/modules/datasets_modules/datasets/c4/8304cf264cc42bdebcb13fca4b9cb36368a96f557d36f9dc969bebbe2568b283/c4.py", line 190, in _split_generators
    file_paths = dl_manager.download_and_extract(files_to_download)
  File "/Users/fredriko/venv/misc/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 258, in download_and_extract
    return self.extract(self.download(url_or_urls))
  File "/Users/fredriko/venv/misc/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 189, in download
    self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)
  File "/Users/fredriko/venv/misc/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 117, in _record_sizes_checksums
    self._recorded_sizes_checksums[str(url)] = get_size_checksum_dict(path)
  File "/Users/fredriko/venv/misc/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 80, in get_size_checksum_dict
    with open(path, "rb") as f:
IsADirectoryError: [Errno 21] Is a directory: '/'

Process finished with exit code 1
```
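An illustrative guard for the failing call at the bottom of the trace (`get_size_checksum_dict` opening `'/'`), not the library's actual fix: rejecting non-file paths early produces a clearer error than `IsADirectoryError`.

```python
import hashlib
import os

def get_size_checksum(path: str) -> dict:
    # Illustrative: checksum a downloaded file, failing fast on directories.
    if not os.path.isfile(path):
        raise ValueError(f"Expected a regular file to checksum, got: {path!r}")
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha.update(chunk)
    return {"num_bytes": os.path.getsize(path), "checksum": sha.hexdigest()}
```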
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1710/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1710/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1709
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1709/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1709/comments
https://api.github.com/repos/huggingface/datasets/issues/1709/events
https://github.com/huggingface/datasets/issues/1709
781,875,640
MDU6SXNzdWU3ODE4NzU2NDA=
1,709
Databases
{ "avatar_url": "https://avatars.githubusercontent.com/u/68724553?v=4", "events_url": "https://api.github.com/users/JimmyJim1/events{/privacy}", "followers_url": "https://api.github.com/users/JimmyJim1/followers", "following_url": "https://api.github.com/users/JimmyJim1/following{/other_user}", "gists_url": "https://api.github.com/users/JimmyJim1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JimmyJim1", "id": 68724553, "login": "JimmyJim1", "node_id": "MDQ6VXNlcjY4NzI0NTUz", "organizations_url": "https://api.github.com/users/JimmyJim1/orgs", "received_events_url": "https://api.github.com/users/JimmyJim1/received_events", "repos_url": "https://api.github.com/users/JimmyJim1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JimmyJim1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JimmyJim1/subscriptions", "type": "User", "url": "https://api.github.com/users/JimmyJim1" }
[]
closed
false
null
[]
null
[]
"2021-01-08T06:14:03Z"
"2021-01-08T09:00:08Z"
"2021-01-08T09:00:08Z"
NONE
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1709/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1709/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1708
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1708/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1708/comments
https://api.github.com/repos/huggingface/datasets/issues/1708/events
https://github.com/huggingface/datasets/issues/1708
781,631,455
MDU6SXNzdWU3ODE2MzE0NTU=
1,708
<html dir="ltr" lang="en" class="focus-outline-visible"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
{ "avatar_url": "https://avatars.githubusercontent.com/u/77126849?v=4", "events_url": "https://api.github.com/users/Louiejay54/events{/privacy}", "followers_url": "https://api.github.com/users/Louiejay54/followers", "following_url": "https://api.github.com/users/Louiejay54/following{/other_user}", "gists_url": "https://api.github.com/users/Louiejay54/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Louiejay54", "id": 77126849, "login": "Louiejay54", "node_id": "MDQ6VXNlcjc3MTI2ODQ5", "organizations_url": "https://api.github.com/users/Louiejay54/orgs", "received_events_url": "https://api.github.com/users/Louiejay54/received_events", "repos_url": "https://api.github.com/users/Louiejay54/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Louiejay54/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Louiejay54/subscriptions", "type": "User", "url": "https://api.github.com/users/Louiejay54" }
[]
closed
false
null
[]
null
[]
"2021-01-07T21:45:24Z"
"2021-01-08T09:00:01Z"
"2021-01-08T09:00:01Z"
NONE
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1708/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1708/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1707
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1707/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1707/comments
https://api.github.com/repos/huggingface/datasets/issues/1707/events
https://github.com/huggingface/datasets/pull/1707
781,507,545
MDExOlB1bGxSZXF1ZXN0NTUxMjE5MDk2
1,707
Added generated READMEs for datasets that were missing one.
{ "avatar_url": "https://avatars.githubusercontent.com/u/272253?v=4", "events_url": "https://api.github.com/users/madlag/events{/privacy}", "followers_url": "https://api.github.com/users/madlag/followers", "following_url": "https://api.github.com/users/madlag/following{/other_user}", "gists_url": "https://api.github.com/users/madlag/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/madlag", "id": 272253, "login": "madlag", "node_id": "MDQ6VXNlcjI3MjI1Mw==", "organizations_url": "https://api.github.com/users/madlag/orgs", "received_events_url": "https://api.github.com/users/madlag/received_events", "repos_url": "https://api.github.com/users/madlag/repos", "site_admin": false, "starred_url": "https://api.github.com/users/madlag/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/madlag/subscriptions", "type": "User", "url": "https://api.github.com/users/madlag" }
[]
closed
false
null
[]
null
[ "Looks like we need to trim the ones with too many configs, will look into it tomorrow!" ]
"2021-01-07T18:10:06Z"
"2021-01-18T14:32:33Z"
"2021-01-18T14:32:33Z"
CONTRIBUTOR
null
This is it: working with Yacine @yjernite, we built a generator and produced dataset cards for all the missing ones (161), using all the information we could gather from the datasets repository and dummy_data to generate examples when possible. Code is available here for the moment: https://github.com/madlag/datasets_readme_generator. We will move it to a Hugging Face repository and to https://huggingface.co/datasets/card-creator/ later.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/1707/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1707/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1707.diff", "html_url": "https://github.com/huggingface/datasets/pull/1707", "merged_at": "2021-01-18T14:32:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/1707.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1707" }
true
https://api.github.com/repos/huggingface/datasets/issues/1706
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1706/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1706/comments
https://api.github.com/repos/huggingface/datasets/issues/1706/events
https://github.com/huggingface/datasets/issues/1706
781,494,476
MDU6SXNzdWU3ODE0OTQ0NzY=
1,706
Error when downloading a large dataset on slow connection.
{ "avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4", "events_url": "https://api.github.com/users/lucadiliello/events{/privacy}", "followers_url": "https://api.github.com/users/lucadiliello/followers", "following_url": "https://api.github.com/users/lucadiliello/following{/other_user}", "gists_url": "https://api.github.com/users/lucadiliello/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lucadiliello", "id": 23355969, "login": "lucadiliello", "node_id": "MDQ6VXNlcjIzMzU1OTY5", "organizations_url": "https://api.github.com/users/lucadiliello/orgs", "received_events_url": "https://api.github.com/users/lucadiliello/received_events", "repos_url": "https://api.github.com/users/lucadiliello/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lucadiliello/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucadiliello/subscriptions", "type": "User", "url": "https://api.github.com/users/lucadiliello" }
[]
open
false
null
[]
null
[ "Hi ! Is this an issue you have with `openwebtext` specifically or also with other datasets ?\r\n\r\nIt looks like the downloaded file is corrupted and can't be extracted using `tarfile`.\r\nCould you try loading it again with \r\n```python\r\nimport datasets\r\ndatasets.load_dataset(\"openwebtext\", download_mode=\"force_redownload\")\r\n```" ]
"2021-01-07T17:48:15Z"
"2021-01-13T10:35:02Z"
null
CONTRIBUTOR
null
I receive the following error after about an hour trying to download the `openwebtext` dataset. The code used is:

```python
import datasets
datasets.load_dataset("openwebtext")
```

> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/datasets/load.py", line 610, in load_dataset
>     ignore_verifications=ignore_verifications,
>   File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/datasets/builder.py", line 515, in download_and_prepare
>     dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
>   File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/datasets/builder.py", line 570, in _download_and_prepare
>     split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
>   File "/home/lucadiliello/.cache/huggingface/modules/datasets_modules/datasets/openwebtext/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02/openwebtext.py", line 62, in _split_generators
>     dl_dir = dl_manager.download_and_extract(_URL)
>   File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract
>     return self.extract(self.download(url_or_urls))
>   File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 235, in extract
>     num_proc=num_proc,
>   File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
>     return function(data_struct)
>   File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 343, in cached_path
>     tar_file.extractall(output_path_extracted)
>   File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/tarfile.py", line 2000, in extractall
>     numeric_owner=numeric_owner)
>   File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/tarfile.py", line 2042, in extract
>     numeric_owner=numeric_owner)
>   File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/tarfile.py", line 2112, in _extract_member
>     self.makefile(tarinfo, targetpath)
>   File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/tarfile.py", line 2161, in makefile
>     copyfileobj(source, target, tarinfo.size, ReadError, bufsize)
>   File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/tarfile.py", line 253, in copyfileobj
>     buf = src.read(remainder)
>   File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/lzma.py", line 200, in read
>     return self._buffer.read(size)
>   File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/_compression.py", line 68, in readinto
>     data = self.read(len(byte_view))
>   File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/_compression.py", line 99, in read
>     raise EOFError("Compressed file ended before the "
> EOFError: Compressed file ended before the end-of-stream marker was reached
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1706/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1706/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1705
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1705/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1705/comments
https://api.github.com/repos/huggingface/datasets/issues/1705/events
https://github.com/huggingface/datasets/pull/1705
781,474,949
MDExOlB1bGxSZXF1ZXN0NTUxMTkyMTc4
1,705
Add information about caching and verifications in "Load a Dataset" docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
null
[]
null
[]
"2021-01-07T17:18:44Z"
"2021-01-12T14:08:01Z"
"2021-01-12T14:08:01Z"
CONTRIBUTOR
null
Related to #215. Missing improvements from @lhoestq's #1703.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1705/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1705/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1705.diff", "html_url": "https://github.com/huggingface/datasets/pull/1705", "merged_at": "2021-01-12T14:08:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/1705.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1705" }
true
https://api.github.com/repos/huggingface/datasets/issues/1704
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1704/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1704/comments
https://api.github.com/repos/huggingface/datasets/issues/1704/events
https://github.com/huggingface/datasets/pull/1704
781,402,757
MDExOlB1bGxSZXF1ZXN0NTUxMTMyNDI1
1,704
Update XSUM Factuality DatasetCard
{ "avatar_url": "https://avatars.githubusercontent.com/u/50873201?v=4", "events_url": "https://api.github.com/users/vineeths96/events{/privacy}", "followers_url": "https://api.github.com/users/vineeths96/followers", "following_url": "https://api.github.com/users/vineeths96/following{/other_user}", "gists_url": "https://api.github.com/users/vineeths96/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vineeths96", "id": 50873201, "login": "vineeths96", "node_id": "MDQ6VXNlcjUwODczMjAx", "organizations_url": "https://api.github.com/users/vineeths96/orgs", "received_events_url": "https://api.github.com/users/vineeths96/received_events", "repos_url": "https://api.github.com/users/vineeths96/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vineeths96/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vineeths96/subscriptions", "type": "User", "url": "https://api.github.com/users/vineeths96" }
[]
closed
false
null
[]
null
[]
"2021-01-07T15:37:14Z"
"2021-01-12T13:30:04Z"
"2021-01-12T13:30:04Z"
CONTRIBUTOR
null
Update XSUM Factuality DatasetCard
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1704/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1704/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1704.diff", "html_url": "https://github.com/huggingface/datasets/pull/1704", "merged_at": "2021-01-12T13:30:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/1704.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1704" }
true
https://api.github.com/repos/huggingface/datasets/issues/1703
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1703/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1703/comments
https://api.github.com/repos/huggingface/datasets/issues/1703/events
https://github.com/huggingface/datasets/pull/1703
781,395,146
MDExOlB1bGxSZXF1ZXN0NTUxMTI2MjA5
1,703
Improvements regarding caching and fingerprinting
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "I few comments here for discussion:\r\n- I'm not convinced yet the end user should really have to understand the difference between \"caching\" and 'fingerprinting\", what do you think? I think fingerprinting should probably stay as an internal thing. Is there a case where we want cahing without fingerprinting or vice-versa?\r\n- while I think the random fingerprint mechanism is smart, I have one question: when we disable caching or fingerprinting we also probably don't want the disk usage to grow so we should then try to keep only one cache file. Is it the case currently?\r\n- the warning should be emitted only once per session if possible (we have a mechanism to do that in transformers, you should ask Lysandre/Sylvain)\r\n\r\n", "About your points:\r\n- Yes I agree, I just wanted to bring the discussion on this point. Until now fingerprinting hasn't been blocking for user experience. I'll probably remove the enable/disable fingerprinting function to keep things simple from the user's perspective.\r\n- Right now every time a not in-place transform (i.e. map, filter) is applied, a new cache file is created. It is the case even if caching is disabled since disabling it only means that the cache file won't be reloaded. Therefore you're right that it might end up filling the disk with files that won't be reused. I like the idea of keeping only one cache file. Currently all the cache files are kept on disk until the user clears the cache. To be able to keep only one, we need to know if a dataset that has been transformed is still loaded or not. For example\r\n```python\r\n# case 1 - keep both cache files (dataset1 and dataset2)\r\ndataset2 = dataset1.map(...)\r\n# case 2 - keep only the new cache file\r\ndataset1 = dataset1.map(...)\r\n```\r\nIn python it doesn't seem trivial to detect such changes. One thing that we can actually do on the other hand is store the cache files in a temporary directory that is cleared when the session closes. I think that's a good a simple solution for this problem.\r\n- Yes good idea ! I don't like spam either :) ", "> * To be able to keep only one, we need to know if a dataset that has been transformed is still loaded or not. For example\r\n> \r\n> ```python\r\n> # case 1 - keep both cache files (dataset1 and dataset2)\r\n> dataset2 = dataset1.map(...)\r\n> # case 2 - keep only the new cache file\r\n> dataset1 = dataset1.map(...)\r\n> ```\r\n\r\nI see what you mean. It's a tricky question. One option would be that if caching is deactivated we have a single memory mapped file and have copy act as a copy by reference instead of a copy by value. We will then probably want a `copy()` or `deepcopy()` functionality. Maybe we should think a little bit about it though.", "- I like the idea of using a temporary directory per session!\r\n- If the default behavior when caching is disabled is to re-use the same file, I'm a little worried about people making mistakes and having to re-download and process from scratch.\r\n- So we already have a keyword argument for `dataset1 = dataset1.map(..., in_place=True)`?", "> * If the default behavior when caching is disabled is to re-use the same file, I'm a little worried about people making mistakes and having to re-download and process from scratch.\r\n\r\nWe should distinguish between the caching from load_dataset (base dataset cache files) and the caching after dataset transforms such as map or filter (transformed dataset cache files). 
When disabling caching only the second type (for map and filter) doesn't reload from cache files.\r\nTherefore nothing is re-downloaded. To re-download the dataset entirely the argument `download_mode=\"force_redownload\"` must be used in `load_dataset`.\r\nDo we have to think more about the naming to make things less confusing in your opinion ?\r\n\r\n> * So we already have a keyword argument for `dataset1 = dataset1.map(..., in_place=True)`?\r\n\r\nThere's no such `in_place` parameter in map, what do you mean exactly ?", "I updated the PR:\r\n- I removed the enable/disable fingerprinting function\r\n- if caching is disabled arrow files are written in a temporary directory that is deleted when session closes\r\n- the warning that is showed when hashing a transform fails is only showed once\r\n- I added the `set_caching_enabled` function to the docs and explained the caching mechanism and its relation with fingerprinting\r\n\r\nI would love to have some feedback :) ", "> > * So we already have a keyword argument for `dataset1 = dataset1.map(..., in_place=True)`?\r\n> \r\n> There's no such `in_place` parameter in map, what do you mean exactly ?\r\n\r\nSorry, that wasn't clear at all. I was responding to your previous comment about case 1 / case 2. I don't think the behavior should depend on the command, but we could have:\r\n\r\n```\r\n# case 1 - keep both cache files (dataset1 and dataset2)\r\ndataset2 = dataset1.map(...)\r\n# case 2 - keep only the new cache file\r\ndataset1 = dataset1.map(..., in_place=True)\r\n```\r\n\r\nCase 1 returns a new reference using the new cache file, case 2 returns the same reference", "> Sorry, that wasn't clear at all. I was responding to your previous comment about case 1 / case 2. I don't think the behavior should depend on the command, but we could have:\r\n> \r\n> ```\r\n> # case 1 - keep both cache files (dataset1 and dataset2)\r\n> dataset2 = dataset1.map(...)\r\n> # case 2 - keep only the new cache file\r\n> dataset1 = dataset1.map(..., in_place=True)\r\n> ```\r\n> \r\n> Case 1 returns a new reference using the new cache file, case 2 returns the same reference\r\n\r\nOk I see !\r\n`in_place` is a parameter that is used in general to designate a transform so I would name that differently (maybe `overwrite` or something like that).\r\nNot sure if it's possible to update an already existing arrow file that is memory-mapped, let me check real quick.\r\nAlso it's possible to call `dataset2.cleanup_cache_files()` to delete the other cache files if we create a new one after the transform. Or even to get the cache file with `dataset1.cache_files` and let the user remove them by hand.\r\n\r\nEDIT: updating an arrow file in place is not part of the current API of pyarrow, so we would have to make new files.\r\n" ]
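A short sketch of the cache-management calls mentioned in the discussion above; `cache_files` and `cleanup_cache_files()` are existing `datasets.Dataset` members, while the dataset and transform are illustrative.

```python
from datasets import load_dataset

ds = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")
ds = ds.map(lambda ex: {"n_chars": len(ex["text"])})

print(ds.cache_files)               # arrow files currently backing the dataset
removed = ds.cleanup_cache_files()  # drop cache files of other (stale) states
print(f"removed {removed} cache file(s)")
```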
"2021-01-07T15:26:29Z"
"2021-01-19T17:32:11Z"
"2021-01-19T17:32:10Z"
MEMBER
null
This PR adds these features:

- Enable/disable caching

If disabled, the library will no longer reload cached datasets files when applying transforms to the datasets. It is equivalent to setting `load_from_cache` to `False` in dataset transforms.

```python
from datasets import set_caching_enabled
set_caching_enabled(False)
```

- Allow unpicklable functions in `map`

If an unpicklable function is used, then it's not possible to hash it to update the dataset fingerprint that is used to name cache files. To work around that, a random fingerprint is generated instead and a warning is raised.

```python
logger.warning(
    f"Transform {transform} couldn't be hashed properly, a random hash was used instead. "
    "Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. "
    "If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything."
)
```

and also (open to discussion, EDIT: actually NOT included):

- Enable/disable fingerprinting

Fingerprinting makes it possible to have one deterministic fingerprint per dataset state. A dataset fingerprint is updated after each transform. Re-running the same transforms on a dataset in a different session results in the same fingerprint. Disabling the fingerprinting mechanism makes all the fingerprints random. Since the caching mechanism uses fingerprints to name the cache files, the cache file names will be different. Therefore disabling fingerprinting will prevent the caching mechanism from reloading datasets files that have already been computed. Disabling fingerprinting may speed up the lib for users that don't care about this feature and don't want to use caching.

```python
from datasets import set_fingerprinting_enabled
set_fingerprinting_enabled(False)
```

Other details:
- I renamed the `fingerprint` decorator to `fingerprint_transform` since the name was clearly not explicit. This decorator is used on dataset transform functions to allow them to update fingerprints.
- I added some `ignore_kwargs` when decorating transforms with `fingerprint_transform`, to make the fingerprint update not sensitive to kwargs like `load_from_cache` or `cache_file_name`.

Todo: tests for set_fingerprinting_enabled + documentation for all the above features
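A usage sketch of the API proposed in this PR, assuming the signature stays as described in the body: with caching disabled, identical transform calls are recomputed on every run instead of being reloaded.

```python
from datasets import load_dataset, set_caching_enabled

set_caching_enabled(False)  # transformed datasets go to temporary files, no reload

ds = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")
ds = ds.map(lambda ex: {"n_chars": len(ex["text"])})  # recomputed on every run

set_caching_enabled(True)   # restore the default caching behavior
```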
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1703/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1703/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1703.diff", "html_url": "https://github.com/huggingface/datasets/pull/1703", "merged_at": "2021-01-19T17:32:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/1703.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1703" }
true
https://api.github.com/repos/huggingface/datasets/issues/1702
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1702/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1702/comments
https://api.github.com/repos/huggingface/datasets/issues/1702/events
https://github.com/huggingface/datasets/pull/1702
781,383,277
MDExOlB1bGxSZXF1ZXN0NTUxMTE2NDc0
1,702
Fix importlib metadata import in py38
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2021-01-07T15:10:30Z"
"2021-01-08T10:47:15Z"
"2021-01-08T10:47:15Z"
MEMBER
null
In Python 3.8 there's no need to install `importlib_metadata` since it already exists as `importlib.metadata` in the standard lib.
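The usual shape of such a fix, as a sketch of the common compatibility pattern (not necessarily the exact diff in this PR):

```python
import sys

if sys.version_info >= (3, 8):
    import importlib.metadata as importlib_metadata  # stdlib since Python 3.8
else:
    import importlib_metadata  # third-party backport for Python < 3.8

# Either way, the same attribute access works afterwards:
print(importlib_metadata.version("datasets"))
```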
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1702/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1702/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1702.diff", "html_url": "https://github.com/huggingface/datasets/pull/1702", "merged_at": "2021-01-08T10:47:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/1702.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1702" }
true
https://api.github.com/repos/huggingface/datasets/issues/1701
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1701/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1701/comments
https://api.github.com/repos/huggingface/datasets/issues/1701/events
https://github.com/huggingface/datasets/issues/1701
781,345,717
MDU6SXNzdWU3ODEzNDU3MTc=
1,701
Some datasets miss dataset_infos.json or dummy_data.zip
{ "avatar_url": "https://avatars.githubusercontent.com/u/272253?v=4", "events_url": "https://api.github.com/users/madlag/events{/privacy}", "followers_url": "https://api.github.com/users/madlag/followers", "following_url": "https://api.github.com/users/madlag/following{/other_user}", "gists_url": "https://api.github.com/users/madlag/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/madlag", "id": 272253, "login": "madlag", "node_id": "MDQ6VXNlcjI3MjI1Mw==", "organizations_url": "https://api.github.com/users/madlag/orgs", "received_events_url": "https://api.github.com/users/madlag/received_events", "repos_url": "https://api.github.com/users/madlag/repos", "site_admin": false, "starred_url": "https://api.github.com/users/madlag/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/madlag/subscriptions", "type": "User", "url": "https://api.github.com/users/madlag" }
[]
closed
false
null
[]
null
[ "Thanks for reporting.\r\nWe should indeed add all the missing dummy_data.zip and also the dataset_infos.json at least for lm1b, reclor and wikihow.\r\n\r\nFor c4 I haven't tested the script and I think we'll require some optimizations regarding beam datasets before processing it.\r\n", "Closing since the dummy data generation is deprecated now (and the issue with missing metadata seems to be addressed)." ]
"2021-01-07T14:17:13Z"
"2022-11-04T15:11:16Z"
"2022-11-04T15:06:00Z"
CONTRIBUTOR
null
While working on the dataset README generation script at https://github.com/madlag/datasets_readme_generator, I noticed that some datasets are missing a dataset_infos.json: ``` c4 lm1b reclor wikihow ``` And some do not have a dummy_data.zip: ``` kor_nli math_dataset mlqa ms_marco newsgroup qa4mre qangaroo reddit_tifu super_glue trivia_qa web_of_science wmt14 wmt15 wmt16 wmt17 wmt18 wmt19 xtreme ``` But it seems that some of the latter do have a "dummy" directory.
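A minimal sketch of the kind of check that surfaces these gaps (the repo layout, one folder per dataset script under `datasets/`, is an assumption based on the repository structure at the time):

```python
from pathlib import Path

datasets_dir = Path("datasets")  # root of the dataset scripts in a repo checkout

for ds in sorted(p for p in datasets_dir.iterdir() if p.is_dir()):
    missing = []
    if not (ds / "dataset_infos.json").exists():
        missing.append("dataset_infos.json")
    # A "dummy" directory may exist even when the zip itself is missing:
    if not list(ds.glob("dummy/**/dummy_data.zip")):
        missing.append("dummy_data.zip")
    if missing:
        print(f"{ds.name}: missing {', '.join(missing)}")
```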
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1701/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1701/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1700
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1700/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1700/comments
https://api.github.com/repos/huggingface/datasets/issues/1700/events
https://github.com/huggingface/datasets/pull/1700
781,333,589
MDExOlB1bGxSZXF1ZXN0NTUxMDc1NTg2
1,700
Update Curiosity dialogs DatasetCard
{ "avatar_url": "https://avatars.githubusercontent.com/u/50873201?v=4", "events_url": "https://api.github.com/users/vineeths96/events{/privacy}", "followers_url": "https://api.github.com/users/vineeths96/followers", "following_url": "https://api.github.com/users/vineeths96/following{/other_user}", "gists_url": "https://api.github.com/users/vineeths96/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vineeths96", "id": 50873201, "login": "vineeths96", "node_id": "MDQ6VXNlcjUwODczMjAx", "organizations_url": "https://api.github.com/users/vineeths96/orgs", "received_events_url": "https://api.github.com/users/vineeths96/received_events", "repos_url": "https://api.github.com/users/vineeths96/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vineeths96/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vineeths96/subscriptions", "type": "User", "url": "https://api.github.com/users/vineeths96" }
[]
closed
false
null
[]
null
[]
"2021-01-07T13:59:27Z"
"2021-01-12T18:51:32Z"
"2021-01-12T18:51:32Z"
CONTRIBUTOR
null
Update Curiosity dialogs DatasetCard Some entries in the data fields section are yet to be filled, as there is little information available about those fields.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1700/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1700/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1700.diff", "html_url": "https://github.com/huggingface/datasets/pull/1700", "merged_at": "2021-01-12T18:51:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/1700.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1700" }
true
https://api.github.com/repos/huggingface/datasets/issues/1699
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1699/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1699/comments
https://api.github.com/repos/huggingface/datasets/issues/1699/events
https://github.com/huggingface/datasets/pull/1699
781,271,558
MDExOlB1bGxSZXF1ZXN0NTUxMDIzODE5
1,699
Update DBRD dataset card and download URL
{ "avatar_url": "https://avatars.githubusercontent.com/u/8875786?v=4", "events_url": "https://api.github.com/users/benjaminvdb/events{/privacy}", "followers_url": "https://api.github.com/users/benjaminvdb/followers", "following_url": "https://api.github.com/users/benjaminvdb/following{/other_user}", "gists_url": "https://api.github.com/users/benjaminvdb/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/benjaminvdb", "id": 8875786, "login": "benjaminvdb", "node_id": "MDQ6VXNlcjg4NzU3ODY=", "organizations_url": "https://api.github.com/users/benjaminvdb/orgs", "received_events_url": "https://api.github.com/users/benjaminvdb/received_events", "repos_url": "https://api.github.com/users/benjaminvdb/repos", "site_admin": false, "starred_url": "https://api.github.com/users/benjaminvdb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benjaminvdb/subscriptions", "type": "User", "url": "https://api.github.com/users/benjaminvdb" }
[]
closed
false
null
[]
null
[ "not sure why the CI was not triggered though" ]
"2021-01-07T12:16:43Z"
"2021-01-07T13:41:39Z"
"2021-01-07T13:40:59Z"
CONTRIBUTOR
null
I've added the Dutch Book Review Dataset (DBRD) during the recent sprint. This pull request makes two minor changes: 1. I'm changing the download URL from Google Drive to the dataset's GitHub release package. This is now possible because of PR #1316. 2. I've updated the dataset card. Cheers! 😄
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1699/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1699/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1699.diff", "html_url": "https://github.com/huggingface/datasets/pull/1699", "merged_at": "2021-01-07T13:40:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/1699.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1699" }
true
https://api.github.com/repos/huggingface/datasets/issues/1698
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1698/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1698/comments
https://api.github.com/repos/huggingface/datasets/issues/1698/events
https://github.com/huggingface/datasets/pull/1698
781,152,561
MDExOlB1bGxSZXF1ZXN0NTUwOTI0ODQ3
1,698
Update Coached Conv Pref DatasetCard
{ "avatar_url": "https://avatars.githubusercontent.com/u/50873201?v=4", "events_url": "https://api.github.com/users/vineeths96/events{/privacy}", "followers_url": "https://api.github.com/users/vineeths96/followers", "following_url": "https://api.github.com/users/vineeths96/following{/other_user}", "gists_url": "https://api.github.com/users/vineeths96/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vineeths96", "id": 50873201, "login": "vineeths96", "node_id": "MDQ6VXNlcjUwODczMjAx", "organizations_url": "https://api.github.com/users/vineeths96/orgs", "received_events_url": "https://api.github.com/users/vineeths96/received_events", "repos_url": "https://api.github.com/users/vineeths96/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vineeths96/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vineeths96/subscriptions", "type": "User", "url": "https://api.github.com/users/vineeths96" }
[]
closed
false
null
[]
null
[ "Really cool!\r\n\r\nCan you add some task tags for `dialogue-modeling` (under `sequence-modeling`) and `parsing` (under `structured-prediction`)?" ]
"2021-01-07T09:07:16Z"
"2021-01-08T17:04:33Z"
"2021-01-08T17:04:32Z"
CONTRIBUTOR
null
Update Coached Conversation Preference DatasetCard
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1698/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1698/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1698.diff", "html_url": "https://github.com/huggingface/datasets/pull/1698", "merged_at": "2021-01-08T17:04:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/1698.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1698" }
true
https://api.github.com/repos/huggingface/datasets/issues/1697
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1697/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1697/comments
https://api.github.com/repos/huggingface/datasets/issues/1697/events
https://github.com/huggingface/datasets/pull/1697
781,126,579
MDExOlB1bGxSZXF1ZXN0NTUwOTAzNzI5
1,697
Update DialogRE DatasetCard
{ "avatar_url": "https://avatars.githubusercontent.com/u/50873201?v=4", "events_url": "https://api.github.com/users/vineeths96/events{/privacy}", "followers_url": "https://api.github.com/users/vineeths96/followers", "following_url": "https://api.github.com/users/vineeths96/following{/other_user}", "gists_url": "https://api.github.com/users/vineeths96/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vineeths96", "id": 50873201, "login": "vineeths96", "node_id": "MDQ6VXNlcjUwODczMjAx", "organizations_url": "https://api.github.com/users/vineeths96/orgs", "received_events_url": "https://api.github.com/users/vineeths96/received_events", "repos_url": "https://api.github.com/users/vineeths96/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vineeths96/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vineeths96/subscriptions", "type": "User", "url": "https://api.github.com/users/vineeths96" }
[]
closed
false
null
[]
null
[ "Same as #1698, can you add a task tag for dialogue-modeling (under sequence-modeling) :) ?" ]
"2021-01-07T08:22:33Z"
"2021-01-07T13:34:28Z"
"2021-01-07T13:34:28Z"
CONTRIBUTOR
null
Update the information in the dataset card for the Dialog RE dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1697/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1697/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1697.diff", "html_url": "https://github.com/huggingface/datasets/pull/1697", "merged_at": "2021-01-07T13:34:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/1697.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1697" }
true
https://api.github.com/repos/huggingface/datasets/issues/1696
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1696/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1696/comments
https://api.github.com/repos/huggingface/datasets/issues/1696/events
https://github.com/huggingface/datasets/issues/1696
781,096,918
MDU6SXNzdWU3ODEwOTY5MTg=
1,696
Unable to install datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/12635475?v=4", "events_url": "https://api.github.com/users/clee-dw/events{/privacy}", "followers_url": "https://api.github.com/users/clee-dw/followers", "following_url": "https://api.github.com/users/clee-dw/following{/other_user}", "gists_url": "https://api.github.com/users/clee-dw/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/clee-dw", "id": 12635475, "login": "clee-dw", "node_id": "MDQ6VXNlcjEyNjM1NDc1", "organizations_url": "https://api.github.com/users/clee-dw/orgs", "received_events_url": "https://api.github.com/users/clee-dw/received_events", "repos_url": "https://api.github.com/users/clee-dw/repos", "site_admin": false, "starred_url": "https://api.github.com/users/clee-dw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clee-dw/subscriptions", "type": "User", "url": "https://api.github.com/users/clee-dw" }
[]
closed
false
null
[]
null
[ "Maybe try to create a virtual env with python 3.8 or 3.7", "Thanks, @thomwolf! I fixed the issue by downgrading python to 3.7. ", "Damn sorry", "Damn sorry" ]
"2021-01-07T07:24:37Z"
"2021-01-08T00:33:05Z"
"2021-01-07T22:06:05Z"
NONE
null
** Edit ** I believe there's a bug with the package when you're installing it with Python 3.9. I recommend sticking with previous versions (see the version-check sketch after the log below). Thanks, @thomwolf for the insight! **Short description** I followed the instructions for installing datasets (https://huggingface.co/docs/datasets/installation.html). However, when I tried to install datasets using `pip install datasets`, I got a massive error message after getting stuck at "Installing build dependencies..." I wondered whether this problem could be fixed by creating a virtual environment, but that didn't help. Can anyone offer some advice on how to fix this issue? Here's the error message: `(env) Gas-MacBook-Pro:Downloads destiny$ pip install datasets Collecting datasets Using cached datasets-1.2.0-py3-none-any.whl (159 kB) Collecting numpy>=1.17 Using cached numpy-1.19.5-cp39-cp39-macosx_10_9_x86_64.whl (15.6 MB) Collecting pyarrow>=0.17.1 Using cached pyarrow-2.0.0.tar.gz (58.9 MB) .... _configtest.c:9:5: warning: incompatible redeclaration of library function 'ceilf' [-Wincompatible-library-redeclaration] int ceilf (void); ^ _configtest.c:9:5: note: 'ceilf' is a builtin with type 'float (float)' _configtest.c:10:5: warning: incompatible redeclaration of library function 'rintf' [-Wincompatible-library-redeclaration] int rintf (void); ^ _configtest.c:10:5: note: 'rintf' is a builtin with type 'float (float)' _configtest.c:11:5: warning: incompatible redeclaration of library function 'truncf' [-Wincompatible-library-redeclaration] int truncf (void); ^ _configtest.c:11:5: note: 'truncf' is a builtin with type 'float (float)' _configtest.c:12:5: warning: incompatible redeclaration of library function 'sqrtf' [-Wincompatible-library-redeclaration] int sqrtf (void); ^ _configtest.c:12:5: note: 'sqrtf' is a builtin with type 'float (float)' _configtest.c:13:5: warning: incompatible redeclaration of library function 'log10f' [-Wincompatible-library-redeclaration] int log10f (void); ^ _configtest.c:13:5: note: 'log10f' is a builtin with type 'float (float)' _configtest.c:14:5: warning: incompatible redeclaration of library function 'logf' [-Wincompatible-library-redeclaration] int logf (void); ^ _configtest.c:14:5: note: 'logf' is a builtin with type 'float (float)' _configtest.c:15:5: warning: incompatible redeclaration of library function 'log1pf' [-Wincompatible-library-redeclaration] int log1pf (void); ^ _configtest.c:15:5: note: 'log1pf' is a builtin with type 'float (float)' _configtest.c:16:5: warning: incompatible redeclaration of library function 'expf' [-Wincompatible-library-redeclaration] int expf (void); ^ _configtest.c:16:5: note: 'expf' is a builtin with type 'float (float)' _configtest.c:17:5: warning: incompatible redeclaration of library function 'expm1f' [-Wincompatible-library-redeclaration] int expm1f (void); ^ _configtest.c:17:5: note: 'expm1f' is a builtin with type 'float (float)' _configtest.c:18:5: warning: incompatible redeclaration of library function 'asinf' [-Wincompatible-library-redeclaration] int asinf (void); ^ _configtest.c:18:5: note: 'asinf' is a builtin with type 'float (float)' _configtest.c:19:5: warning: incompatible redeclaration of library function 'acosf' [-Wincompatible-library-redeclaration] int acosf (void); ^ _configtest.c:19:5: note: 'acosf' is a builtin with type 'float (float)' _configtest.c:20:5: warning: incompatible redeclaration of library function 'atanf' [-Wincompatible-library-redeclaration] int atanf (void); ^ _configtest.c:20:5: note: 'atanf' is a builtin with type 'float (float)' 
_configtest.c:21:5: warning: incompatible redeclaration of library function 'asinhf' [-Wincompatible-library-redeclaration] int asinhf (void); ^ _configtest.c:21:5: note: 'asinhf' is a builtin with type 'float (float)' _configtest.c:22:5: warning: incompatible redeclaration of library function 'acoshf' [-Wincompatible-library-redeclaration] int acoshf (void); ^ _configtest.c:22:5: note: 'acoshf' is a builtin with type 'float (float)' _configtest.c:23:5: warning: incompatible redeclaration of library function 'atanhf' [-Wincompatible-library-redeclaration] int atanhf (void); ^ _configtest.c:23:5: note: 'atanhf' is a builtin with type 'float (float)' _configtest.c:24:5: warning: incompatible redeclaration of library function 'hypotf' [-Wincompatible-library-redeclaration] int hypotf (void); ^ _configtest.c:24:5: note: 'hypotf' is a builtin with type 'float (float, float)' _configtest.c:25:5: warning: incompatible redeclaration of library function 'atan2f' [-Wincompatible-library-redeclaration] int atan2f (void); ^ _configtest.c:25:5: note: 'atan2f' is a builtin with type 'float (float, float)' _configtest.c:26:5: warning: incompatible redeclaration of library function 'powf' [-Wincompatible-library-redeclaration] int powf (void); ^ _configtest.c:26:5: note: 'powf' is a builtin with type 'float (float, float)' _configtest.c:27:5: warning: incompatible redeclaration of library function 'fmodf' [-Wincompatible-library-redeclaration] int fmodf (void); ^ _configtest.c:27:5: note: 'fmodf' is a builtin with type 'float (float, float)' _configtest.c:28:5: warning: incompatible redeclaration of library function 'modff' [-Wincompatible-library-redeclaration] int modff (void); ^ _configtest.c:28:5: note: 'modff' is a builtin with type 'float (float, float *)' _configtest.c:29:5: warning: incompatible redeclaration of library function 'frexpf' [-Wincompatible-library-redeclaration] int frexpf (void); ^ _configtest.c:29:5: note: 'frexpf' is a builtin with type 'float (float, int *)' _configtest.c:30:5: warning: incompatible redeclaration of library function 'ldexpf' [-Wincompatible-library-redeclaration] int ldexpf (void); ^ _configtest.c:30:5: note: 'ldexpf' is a builtin with type 'float (float, int)' _configtest.c:31:5: warning: incompatible redeclaration of library function 'exp2f' [-Wincompatible-library-redeclaration] int exp2f (void); ^ _configtest.c:31:5: note: 'exp2f' is a builtin with type 'float (float)' _configtest.c:32:5: warning: incompatible redeclaration of library function 'log2f' [-Wincompatible-library-redeclaration] int log2f (void); ^ _configtest.c:32:5: note: 'log2f' is a builtin with type 'float (float)' _configtest.c:33:5: warning: incompatible redeclaration of library function 'copysignf' [-Wincompatible-library-redeclaration] int copysignf (void); ^ _configtest.c:33:5: note: 'copysignf' is a builtin with type 'float (float, float)' _configtest.c:34:5: warning: incompatible redeclaration of library function 'nextafterf' [-Wincompatible-library-redeclaration] int nextafterf (void); ^ _configtest.c:34:5: note: 'nextafterf' is a builtin with type 'float (float, float)' _configtest.c:35:5: warning: incompatible redeclaration of library function 'cbrtf' [-Wincompatible-library-redeclaration] int cbrtf (void); ^ _configtest.c:35:5: note: 'cbrtf' is a builtin with type 'float (float)' 35 warnings generated. clang _configtest.o -o _configtest success! 
removing: _configtest.c _configtest.o _configtest.o.d _configtest C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c' clang: _configtest.c _configtest.c:1:5: warning: incompatible redeclaration of library function 'sinl' [-Wincompatible-library-redeclaration] int sinl (void); ^ _configtest.c:1:5: note: 'sinl' is a builtin with type 'long double (long double)' _configtest.c:2:5: warning: incompatible redeclaration of library function 'cosl' [-Wincompatible-library-redeclaration] int cosl (void); ^ _configtest.c:2:5: note: 'cosl' is a builtin with type 'long double (long double)' _configtest.c:3:5: warning: incompatible redeclaration of library function 'tanl' [-Wincompatible-library-redeclaration] int tanl (void); ^ _configtest.c:3:5: note: 'tanl' is a builtin with type 'long double (long double)' _configtest.c:4:5: warning: incompatible redeclaration of library function 'sinhl' [-Wincompatible-library-redeclaration] int sinhl (void); ^ _configtest.c:4:5: note: 'sinhl' is a builtin with type 'long double (long double)' _configtest.c:5:5: warning: incompatible redeclaration of library function 'coshl' [-Wincompatible-library-redeclaration] int coshl (void); ^ _configtest.c:5:5: note: 'coshl' is a builtin with type 'long double (long double)' _configtest.c:6:5: warning: incompatible redeclaration of library function 'tanhl' [-Wincompatible-library-redeclaration] int tanhl (void); ^ _configtest.c:6:5: note: 'tanhl' is a builtin with type 'long double (long double)' _configtest.c:7:5: warning: incompatible redeclaration of library function 'fabsl' [-Wincompatible-library-redeclaration] int fabsl (void); ^ _configtest.c:7:5: note: 'fabsl' is a builtin with type 'long double (long double)' _configtest.c:8:5: warning: incompatible redeclaration of library function 'floorl' [-Wincompatible-library-redeclaration] int floorl (void); ^ _configtest.c:8:5: note: 'floorl' is a builtin with type 'long double (long double)' _configtest.c:9:5: warning: incompatible redeclaration of library function 'ceill' [-Wincompatible-library-redeclaration] int ceill (void); ^ _configtest.c:9:5: note: 'ceill' is a builtin with type 'long double (long double)' _configtest.c:10:5: warning: incompatible redeclaration of library function 'rintl' [-Wincompatible-library-redeclaration] int rintl (void); ^ _configtest.c:10:5: note: 'rintl' is a builtin with type 'long double (long double)' _configtest.c:11:5: warning: incompatible redeclaration of library function 'truncl' [-Wincompatible-library-redeclaration] int truncl (void); ^ _configtest.c:11:5: note: 'truncl' is a builtin with type 'long double (long double)' _configtest.c:12:5: warning: incompatible redeclaration of library function 'sqrtl' [-Wincompatible-library-redeclaration] int sqrtl (void); ^ _configtest.c:12:5: note: 'sqrtl' is a builtin with 
type 'long double (long double)' _configtest.c:13:5: warning: incompatible redeclaration of library function 'log10l' [-Wincompatible-library-redeclaration] int log10l (void); ^ _configtest.c:13:5: note: 'log10l' is a builtin with type 'long double (long double)' _configtest.c:14:5: warning: incompatible redeclaration of library function 'logl' [-Wincompatible-library-redeclaration] int logl (void); ^ _configtest.c:14:5: note: 'logl' is a builtin with type 'long double (long double)' _configtest.c:15:5: warning: incompatible redeclaration of library function 'log1pl' [-Wincompatible-library-redeclaration] int log1pl (void); ^ _configtest.c:15:5: note: 'log1pl' is a builtin with type 'long double (long double)' _configtest.c:16:5: warning: incompatible redeclaration of library function 'expl' [-Wincompatible-library-redeclaration] int expl (void); ^ _configtest.c:16:5: note: 'expl' is a builtin with type 'long double (long double)' _configtest.c:17:5: warning: incompatible redeclaration of library function 'expm1l' [-Wincompatible-library-redeclaration] int expm1l (void); ^ _configtest.c:17:5: note: 'expm1l' is a builtin with type 'long double (long double)' _configtest.c:18:5: warning: incompatible redeclaration of library function 'asinl' [-Wincompatible-library-redeclaration] int asinl (void); ^ _configtest.c:18:5: note: 'asinl' is a builtin with type 'long double (long double)' _configtest.c:19:5: warning: incompatible redeclaration of library function 'acosl' [-Wincompatible-library-redeclaration] int acosl (void); ^ _configtest.c:19:5: note: 'acosl' is a builtin with type 'long double (long double)' _configtest.c:20:5: warning: incompatible redeclaration of library function 'atanl' [-Wincompatible-library-redeclaration] int atanl (void); ^ _configtest.c:20:5: note: 'atanl' is a builtin with type 'long double (long double)' _configtest.c:21:5: warning: incompatible redeclaration of library function 'asinhl' [-Wincompatible-library-redeclaration] int asinhl (void); ^ _configtest.c:21:5: note: 'asinhl' is a builtin with type 'long double (long double)' _configtest.c:22:5: warning: incompatible redeclaration of library function 'acoshl' [-Wincompatible-library-redeclaration] int acoshl (void); ^ _configtest.c:22:5: note: 'acoshl' is a builtin with type 'long double (long double)' _configtest.c:23:5: warning: incompatible redeclaration of library function 'atanhl' [-Wincompatible-library-redeclaration] int atanhl (void); ^ _configtest.c:23:5: note: 'atanhl' is a builtin with type 'long double (long double)' _configtest.c:24:5: warning: incompatible redeclaration of library function 'hypotl' [-Wincompatible-library-redeclaration] int hypotl (void); ^ _configtest.c:24:5: note: 'hypotl' is a builtin with type 'long double (long double, long double)' _configtest.c:25:5: warning: incompatible redeclaration of library function 'atan2l' [-Wincompatible-library-redeclaration] int atan2l (void); ^ _configtest.c:25:5: note: 'atan2l' is a builtin with type 'long double (long double, long double)' _configtest.c:26:5: warning: incompatible redeclaration of library function 'powl' [-Wincompatible-library-redeclaration] int powl (void); ^ _configtest.c:26:5: note: 'powl' is a builtin with type 'long double (long double, long double)' _configtest.c:27:5: warning: incompatible redeclaration of library function 'fmodl' [-Wincompatible-library-redeclaration] int fmodl (void); ^ _configtest.c:27:5: note: 'fmodl' is a builtin with type 'long double (long double, long double)' _configtest.c:28:5: warning: 
incompatible redeclaration of library function 'modfl' [-Wincompatible-library-redeclaration] int modfl (void); ^ _configtest.c:28:5: note: 'modfl' is a builtin with type 'long double (long double, long double *)' _configtest.c:29:5: warning: incompatible redeclaration of library function 'frexpl' [-Wincompatible-library-redeclaration] int frexpl (void); ^ _configtest.c:29:5: note: 'frexpl' is a builtin with type 'long double (long double, int *)' _configtest.c:30:5: warning: incompatible redeclaration of library function 'ldexpl' [-Wincompatible-library-redeclaration] int ldexpl (void); ^ _configtest.c:30:5: note: 'ldexpl' is a builtin with type 'long double (long double, int)' _configtest.c:31:5: warning: incompatible redeclaration of library function 'exp2l' [-Wincompatible-library-redeclaration] int exp2l (void); ^ _configtest.c:31:5: note: 'exp2l' is a builtin with type 'long double (long double)' _configtest.c:32:5: warning: incompatible redeclaration of library function 'log2l' [-Wincompatible-library-redeclaration] int log2l (void); ^ _configtest.c:32:5: note: 'log2l' is a builtin with type 'long double (long double)' _configtest.c:33:5: warning: incompatible redeclaration of library function 'copysignl' [-Wincompatible-library-redeclaration] int copysignl (void); ^ _configtest.c:33:5: note: 'copysignl' is a builtin with type 'long double (long double, long double)' _configtest.c:34:5: warning: incompatible redeclaration of library function 'nextafterl' [-Wincompatible-library-redeclaration] int nextafterl (void); ^ _configtest.c:34:5: note: 'nextafterl' is a builtin with type 'long double (long double, long double)' _configtest.c:35:5: warning: incompatible redeclaration of library function 'cbrtl' [-Wincompatible-library-redeclaration] int cbrtl (void); ^ _configtest.c:35:5: note: 'cbrtl' is a builtin with type 'long double (long double)' 35 warnings generated. clang _configtest.o -o _configtest success! removing: _configtest.c _configtest.o _configtest.o.d _configtest C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c' clang: _configtest.c success! 
removing: _configtest.c _configtest.o _configtest.o.d C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c' clang: _configtest.c success! removing: _configtest.c _configtest.o _configtest.o.d C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c' clang: _configtest.c success! removing: _configtest.c _configtest.o _configtest.o.d C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c' clang: _configtest.c success! 
removing: _configtest.c _configtest.o _configtest.o.d C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c' clang: _configtest.c _configtest.c:8:12: error: use of undeclared identifier 'HAVE_DECL_SIGNBIT' (void) HAVE_DECL_SIGNBIT; ^ 1 error generated. failure. removing: _configtest.c _configtest.o C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c' clang: _configtest.c success! removing: _configtest.c _configtest.o _configtest.o.d C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c' clang: _configtest.c success! 
removing: _configtest.c _configtest.o _configtest.o.d C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c' clang: _configtest.c success! removing: _configtest.c _configtest.o _configtest.o.d C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c' clang: _configtest.c success! removing: _configtest.c _configtest.o _configtest.o.d C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c' clang: _configtest.c removing: _configtest.c _configtest.o _configtest.o.d C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c' clang: _configtest.c removing: _configtest.c _configtest.o _configtest.o.d C compiler: clang -Wno-unused-result 
-Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c' clang: _configtest.c removing: _configtest.c _configtest.o _configtest.o.d C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c' clang: _configtest.c _configtest.c:1:5: warning: incompatible redeclaration of library function 'cabs' [-Wincompatible-library-redeclaration] int cabs (void); ^ _configtest.c:1:5: note: 'cabs' is a builtin with type 'double (_Complex double)' _configtest.c:2:5: warning: incompatible redeclaration of library function 'cacos' [-Wincompatible-library-redeclaration] int cacos (void); ^ _configtest.c:2:5: note: 'cacos' is a builtin with type '_Complex double (_Complex double)' _configtest.c:3:5: warning: incompatible redeclaration of library function 'cacosh' [-Wincompatible-library-redeclaration] int cacosh (void); ^ _configtest.c:3:5: note: 'cacosh' is a builtin with type '_Complex double (_Complex double)' _configtest.c:4:5: warning: incompatible redeclaration of library function 'carg' [-Wincompatible-library-redeclaration] int carg (void); ^ _configtest.c:4:5: note: 'carg' is a builtin with type 'double (_Complex double)' _configtest.c:5:5: warning: incompatible redeclaration of library function 'casin' [-Wincompatible-library-redeclaration] int casin (void); ^ _configtest.c:5:5: note: 'casin' is a builtin with type '_Complex double (_Complex double)' _configtest.c:6:5: warning: incompatible redeclaration of library function 'casinh' [-Wincompatible-library-redeclaration] int casinh (void); ^ _configtest.c:6:5: note: 'casinh' is a builtin with type '_Complex double (_Complex double)' _configtest.c:7:5: warning: incompatible redeclaration of library function 'catan' [-Wincompatible-library-redeclaration] int catan (void); ^ _configtest.c:7:5: note: 'catan' is a builtin with type '_Complex double (_Complex double)' _configtest.c:8:5: warning: incompatible redeclaration of library function 'catanh' [-Wincompatible-library-redeclaration] int catanh (void); ^ _configtest.c:8:5: note: 'catanh' is a builtin with type '_Complex double (_Complex double)' _configtest.c:9:5: warning: incompatible redeclaration of library 
function 'ccos' [-Wincompatible-library-redeclaration] int ccos (void); ^ _configtest.c:9:5: note: 'ccos' is a builtin with type '_Complex double (_Complex double)' _configtest.c:10:5: warning: incompatible redeclaration of library function 'ccosh' [-Wincompatible-library-redeclaration] int ccosh (void); ^ _configtest.c:10:5: note: 'ccosh' is a builtin with type '_Complex double (_Complex double)' _configtest.c:11:5: warning: incompatible redeclaration of library function 'cexp' [-Wincompatible-library-redeclaration] int cexp (void); ^ _configtest.c:11:5: note: 'cexp' is a builtin with type '_Complex double (_Complex double)' _configtest.c:12:5: warning: incompatible redeclaration of library function 'cimag' [-Wincompatible-library-redeclaration] int cimag (void); ^ _configtest.c:12:5: note: 'cimag' is a builtin with type 'double (_Complex double)' _configtest.c:13:5: warning: incompatible redeclaration of library function 'clog' [-Wincompatible-library-redeclaration] int clog (void); ^ _configtest.c:13:5: note: 'clog' is a builtin with type '_Complex double (_Complex double)' _configtest.c:14:5: warning: incompatible redeclaration of library function 'conj' [-Wincompatible-library-redeclaration] int conj (void); ^ _configtest.c:14:5: note: 'conj' is a builtin with type '_Complex double (_Complex double)' _configtest.c:15:5: warning: incompatible redeclaration of library function 'cpow' [-Wincompatible-library-redeclaration] int cpow (void); ^ _configtest.c:15:5: note: 'cpow' is a builtin with type '_Complex double (_Complex double, _Complex double)' _configtest.c:16:5: warning: incompatible redeclaration of library function 'cproj' [-Wincompatible-library-redeclaration] int cproj (void); ^ _configtest.c:16:5: note: 'cproj' is a builtin with type '_Complex double (_Complex double)' _configtest.c:17:5: warning: incompatible redeclaration of library function 'creal' [-Wincompatible-library-redeclaration] int creal (void); ^ _configtest.c:17:5: note: 'creal' is a builtin with type 'double (_Complex double)' _configtest.c:18:5: warning: incompatible redeclaration of library function 'csin' [-Wincompatible-library-redeclaration] int csin (void); ^ _configtest.c:18:5: note: 'csin' is a builtin with type '_Complex double (_Complex double)' _configtest.c:19:5: warning: incompatible redeclaration of library function 'csinh' [-Wincompatible-library-redeclaration] int csinh (void); ^ _configtest.c:19:5: note: 'csinh' is a builtin with type '_Complex double (_Complex double)' _configtest.c:20:5: warning: incompatible redeclaration of library function 'csqrt' [-Wincompatible-library-redeclaration] int csqrt (void); ^ _configtest.c:20:5: note: 'csqrt' is a builtin with type '_Complex double (_Complex double)' _configtest.c:21:5: warning: incompatible redeclaration of library function 'ctan' [-Wincompatible-library-redeclaration] int ctan (void); ^ _configtest.c:21:5: note: 'ctan' is a builtin with type '_Complex double (_Complex double)' _configtest.c:22:5: warning: incompatible redeclaration of library function 'ctanh' [-Wincompatible-library-redeclaration] int ctanh (void); ^ _configtest.c:22:5: note: 'ctanh' is a builtin with type '_Complex double (_Complex double)' 22 warnings generated. clang _configtest.o -o _configtest success! 
removing: _configtest.c _configtest.o _configtest.o.d _configtest C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c' clang: _configtest.c _configtest.c:1:5: warning: incompatible redeclaration of library function 'cabsf' [-Wincompatible-library-redeclaration] int cabsf (void); ^ _configtest.c:1:5: note: 'cabsf' is a builtin with type 'float (_Complex float)' _configtest.c:2:5: warning: incompatible redeclaration of library function 'cacosf' [-Wincompatible-library-redeclaration] int cacosf (void); ^ _configtest.c:2:5: note: 'cacosf' is a builtin with type '_Complex float (_Complex float)' _configtest.c:3:5: warning: incompatible redeclaration of library function 'cacoshf' [-Wincompatible-library-redeclaration] int cacoshf (void); ^ _configtest.c:3:5: note: 'cacoshf' is a builtin with type '_Complex float (_Complex float)' _configtest.c:4:5: warning: incompatible redeclaration of library function 'cargf' [-Wincompatible-library-redeclaration] int cargf (void); ^ _configtest.c:4:5: note: 'cargf' is a builtin with type 'float (_Complex float)' _configtest.c:5:5: warning: incompatible redeclaration of library function 'casinf' [-Wincompatible-library-redeclaration] int casinf (void); ^ _configtest.c:5:5: note: 'casinf' is a builtin with type '_Complex float (_Complex float)' _configtest.c:6:5: warning: incompatible redeclaration of library function 'casinhf' [-Wincompatible-library-redeclaration] int casinhf (void); ^ _configtest.c:6:5: note: 'casinhf' is a builtin with type '_Complex float (_Complex float)' _configtest.c:7:5: warning: incompatible redeclaration of library function 'catanf' [-Wincompatible-library-redeclaration] int catanf (void); ^ _configtest.c:7:5: note: 'catanf' is a builtin with type '_Complex float (_Complex float)' _configtest.c:8:5: warning: incompatible redeclaration of library function 'catanhf' [-Wincompatible-library-redeclaration] int catanhf (void); ^ _configtest.c:8:5: note: 'catanhf' is a builtin with type '_Complex float (_Complex float)' _configtest.c:9:5: warning: incompatible redeclaration of library function 'ccosf' [-Wincompatible-library-redeclaration] int ccosf (void); ^ _configtest.c:9:5: note: 'ccosf' is a builtin with type '_Complex float (_Complex float)' _configtest.c:10:5: warning: incompatible redeclaration of library function 'ccoshf' [-Wincompatible-library-redeclaration] int ccoshf (void); ^ _configtest.c:10:5: note: 'ccoshf' is a builtin with type '_Complex float (_Complex float)' _configtest.c:11:5: warning: incompatible redeclaration of library function 'cexpf' [-Wincompatible-library-redeclaration] int cexpf (void); ^ _configtest.c:11:5: note: 'cexpf' is a builtin with type '_Complex float (_Complex float)' _configtest.c:12:5: warning: incompatible redeclaration of library function 'cimagf' 
[-Wincompatible-library-redeclaration] int cimagf (void); ^ _configtest.c:12:5: note: 'cimagf' is a builtin with type 'float (_Complex float)' _configtest.c:13:5: warning: incompatible redeclaration of library function 'clogf' [-Wincompatible-library-redeclaration] int clogf (void); ^ _configtest.c:13:5: note: 'clogf' is a builtin with type '_Complex float (_Complex float)' _configtest.c:14:5: warning: incompatible redeclaration of library function 'conjf' [-Wincompatible-library-redeclaration] int conjf (void); ^ _configtest.c:14:5: note: 'conjf' is a builtin with type '_Complex float (_Complex float)' _configtest.c:15:5: warning: incompatible redeclaration of library function 'cpowf' [-Wincompatible-library-redeclaration] int cpowf (void); ^ _configtest.c:15:5: note: 'cpowf' is a builtin with type '_Complex float (_Complex float, _Complex float)' _configtest.c:16:5: warning: incompatible redeclaration of library function 'cprojf' [-Wincompatible-library-redeclaration] int cprojf (void); ^ _configtest.c:16:5: note: 'cprojf' is a builtin with type '_Complex float (_Complex float)' _configtest.c:17:5: warning: incompatible redeclaration of library function 'crealf' [-Wincompatible-library-redeclaration] int crealf (void); ^ _configtest.c:17:5: note: 'crealf' is a builtin with type 'float (_Complex float)' _configtest.c:18:5: warning: incompatible redeclaration of library function 'csinf' [-Wincompatible-library-redeclaration] int csinf (void); ^ _configtest.c:18:5: note: 'csinf' is a builtin with type '_Complex float (_Complex float)' _configtest.c:19:5: warning: incompatible redeclaration of library function 'csinhf' [-Wincompatible-library-redeclaration] int csinhf (void); ^ _configtest.c:19:5: note: 'csinhf' is a builtin with type '_Complex float (_Complex float)' _configtest.c:20:5: warning: incompatible redeclaration of library function 'csqrtf' [-Wincompatible-library-redeclaration] int csqrtf (void); ^ _configtest.c:20:5: note: 'csqrtf' is a builtin with type '_Complex float (_Complex float)' _configtest.c:21:5: warning: incompatible redeclaration of library function 'ctanf' [-Wincompatible-library-redeclaration] int ctanf (void); ^ _configtest.c:21:5: note: 'ctanf' is a builtin with type '_Complex float (_Complex float)' _configtest.c:22:5: warning: incompatible redeclaration of library function 'ctanhf' [-Wincompatible-library-redeclaration] int ctanhf (void); ^ _configtest.c:22:5: note: 'ctanhf' is a builtin with type '_Complex float (_Complex float)' 22 warnings generated. clang _configtest.o -o _configtest success! 
removing: _configtest.c _configtest.o _configtest.o.d _configtest C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c' clang: _configtest.c _configtest.c:1:5: warning: incompatible redeclaration of library function 'cabsl' [-Wincompatible-library-redeclaration] int cabsl (void); ^ _configtest.c:1:5: note: 'cabsl' is a builtin with type 'long double (_Complex long double)' _configtest.c:2:5: warning: incompatible redeclaration of library function 'cacosl' [-Wincompatible-library-redeclaration] int cacosl (void); ^ _configtest.c:2:5: note: 'cacosl' is a builtin with type '_Complex long double (_Complex long double)' _configtest.c:3:5: warning: incompatible redeclaration of library function 'cacoshl' [-Wincompatible-library-redeclaration] int cacoshl (void); ^ _configtest.c:3:5: note: 'cacoshl' is a builtin with type '_Complex long double (_Complex long double)' _configtest.c:4:5: warning: incompatible redeclaration of library function 'cargl' [-Wincompatible-library-redeclaration] int cargl (void); ^ _configtest.c:4:5: note: 'cargl' is a builtin with type 'long double (_Complex long double)' _configtest.c:5:5: warning: incompatible redeclaration of library function 'casinl' [-Wincompatible-library-redeclaration] int casinl (void); ^ _configtest.c:5:5: note: 'casinl' is a builtin with type '_Complex long double (_Complex long double)' _configtest.c:6:5: warning: incompatible redeclaration of library function 'casinhl' [-Wincompatible-library-redeclaration] int casinhl (void); ^ _configtest.c:6:5: note: 'casinhl' is a builtin with type '_Complex long double (_Complex long double)' _configtest.c:7:5: warning: incompatible redeclaration of library function 'catanl' [-Wincompatible-library-redeclaration] int catanl (void); ^ _configtest.c:7:5: note: 'catanl' is a builtin with type '_Complex long double (_Complex long double)' _configtest.c:8:5: warning: incompatible redeclaration of library function 'catanhl' [-Wincompatible-library-redeclaration] int catanhl (void); ^ _configtest.c:8:5: note: 'catanhl' is a builtin with type '_Complex long double (_Complex long double)' _configtest.c:9:5: warning: incompatible redeclaration of library function 'ccosl' [-Wincompatible-library-redeclaration] int ccosl (void); ^ _configtest.c:9:5: note: 'ccosl' is a builtin with type '_Complex long double (_Complex long double)' _configtest.c:10:5: warning: incompatible redeclaration of library function 'ccoshl' [-Wincompatible-library-redeclaration] int ccoshl (void); ^ _configtest.c:10:5: note: 'ccoshl' is a builtin with type '_Complex long double (_Complex long double)' _configtest.c:11:5: warning: incompatible redeclaration of library function 'cexpl' [-Wincompatible-library-redeclaration] int cexpl (void); ^ _configtest.c:11:5: note: 'cexpl' is a builtin with type '_Complex long double 
(_Complex long double)' _configtest.c:12:5: warning: incompatible redeclaration of library function 'cimagl' [-Wincompatible-library-redeclaration] int cimagl (void); ^ _configtest.c:12:5: note: 'cimagl' is a builtin with type 'long double (_Complex long double)' _configtest.c:13:5: warning: incompatible redeclaration of library function 'clogl' [-Wincompatible-library-redeclaration] int clogl (void); ^ _configtest.c:13:5: note: 'clogl' is a builtin with type '_Complex long double (_Complex long double)' _configtest.c:14:5: warning: incompatible redeclaration of library function 'conjl' [-Wincompatible-library-redeclaration] int conjl (void); ^ _configtest.c:14:5: note: 'conjl' is a builtin with type '_Complex long double (_Complex long double)' _configtest.c:15:5: warning: incompatible redeclaration of library function 'cpowl' [-Wincompatible-library-redeclaration] int cpowl (void); ^ _configtest.c:15:5: note: 'cpowl' is a builtin with type '_Complex long double (_Complex long double, _Complex long double)' _configtest.c:16:5: warning: incompatible redeclaration of library function 'cprojl' [-Wincompatible-library-redeclaration] int cprojl (void); ^ _configtest.c:16:5: note: 'cprojl' is a builtin with type '_Complex long double (_Complex long double)' _configtest.c:17:5: warning: incompatible redeclaration of library function 'creall' [-Wincompatible-library-redeclaration] int creall (void); ^ _configtest.c:17:5: note: 'creall' is a builtin with type 'long double (_Complex long double)' _configtest.c:18:5: warning: incompatible redeclaration of library function 'csinl' [-Wincompatible-library-redeclaration] int csinl (void); ^ _configtest.c:18:5: note: 'csinl' is a builtin with type '_Complex long double (_Complex long double)' _configtest.c:19:5: warning: incompatible redeclaration of library function 'csinhl' [-Wincompatible-library-redeclaration] int csinhl (void); ^ _configtest.c:19:5: note: 'csinhl' is a builtin with type '_Complex long double (_Complex long double)' _configtest.c:20:5: warning: incompatible redeclaration of library function 'csqrtl' [-Wincompatible-library-redeclaration] int csqrtl (void); ^ _configtest.c:20:5: note: 'csqrtl' is a builtin with type '_Complex long double (_Complex long double)' _configtest.c:21:5: warning: incompatible redeclaration of library function 'ctanl' [-Wincompatible-library-redeclaration] int ctanl (void); ^ _configtest.c:21:5: note: 'ctanl' is a builtin with type '_Complex long double (_Complex long double)' _configtest.c:22:5: warning: incompatible redeclaration of library function 'ctanhl' [-Wincompatible-library-redeclaration] int ctanhl (void); ^ _configtest.c:22:5: note: 'ctanhl' is a builtin with type '_Complex long double (_Complex long double)' 22 warnings generated. clang _configtest.o -o _configtest success! 
removing: _configtest.c _configtest.o _configtest.o.d _configtest C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c' clang: _configtest.c _configtest.c:2:12: warning: unused function 'static_func' [-Wunused-function] static int static_func (char * restrict a) ^ 1 warning generated. success! removing: _configtest.c _configtest.o _configtest.o.d C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c' clang: _configtest.c _configtest.c:3:19: warning: unused function 'static_func' [-Wunused-function] static inline int static_func (void) ^ 1 warning generated. success! 
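The two `unused function 'static_func'` warnings just above come from a different kind of probe: one checks that the compiler accepts C99 `restrict`, the other that it accepts `inline`. These are compile-only tests, so the probe function is never called and clang's `-Wunused-function` fires harmlessly. A sketch of the `restrict` probe, assuming the same shape the warning quotes:

```c
/* Sketch of the C99 `restrict` probe: if this file compiles, numpy emits
 * `#define NPY_RESTRICT restrict` into config.h (visible in the dump below).
 * The function is intentionally never called, hence -Wunused-function. */
static int
static_func (char * restrict a)
{
    return (int) a[0];
}

int
main (void)
{
    return 0;
}
```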
removing: _configtest.c _configtest.o _configtest.o.d C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c' clang: _configtest.c removing: _configtest.c _configtest.o _configtest.o.d File: build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/config.h #define SIZEOF_PY_INTPTR_T 8 #define SIZEOF_OFF_T 8 #define SIZEOF_PY_LONG_LONG 8 #define MATHLIB #define HAVE_SIN 1 #define HAVE_COS 1 #define HAVE_TAN 1 #define HAVE_SINH 1 #define HAVE_COSH 1 #define HAVE_TANH 1 #define HAVE_FABS 1 #define HAVE_FLOOR 1 #define HAVE_CEIL 1 #define HAVE_SQRT 1 #define HAVE_LOG10 1 #define HAVE_LOG 1 #define HAVE_EXP 1 #define HAVE_ASIN 1 #define HAVE_ACOS 1 #define HAVE_ATAN 1 #define HAVE_FMOD 1 #define HAVE_MODF 1 #define HAVE_FREXP 1 #define HAVE_LDEXP 1 #define HAVE_RINT 1 #define HAVE_TRUNC 1 #define HAVE_EXP2 1 #define HAVE_LOG2 1 #define HAVE_ATAN2 1 #define HAVE_POW 1 #define HAVE_NEXTAFTER 1 #define HAVE_STRTOLL 1 #define HAVE_STRTOULL 1 #define HAVE_CBRT 1 #define HAVE_STRTOLD_L 1 #define HAVE_BACKTRACE 1 #define HAVE_MADVISE 1 #define HAVE_XMMINTRIN_H 1 #define HAVE_EMMINTRIN_H 1 #define HAVE_XLOCALE_H 1 #define HAVE_DLFCN_H 1 #define HAVE_SYS_MMAN_H 1 #define HAVE___BUILTIN_ISNAN 1 #define HAVE___BUILTIN_ISINF 1 #define HAVE___BUILTIN_ISFINITE 1 #define HAVE___BUILTIN_BSWAP32 1 #define HAVE___BUILTIN_BSWAP64 1 #define HAVE___BUILTIN_EXPECT 1 #define HAVE___BUILTIN_MUL_OVERFLOW 1 #define HAVE___BUILTIN_CPU_SUPPORTS 1 #define HAVE__M_FROM_INT64 1 #define HAVE__MM_LOAD_PS 1 #define HAVE__MM_PREFETCH 1 #define HAVE__MM_LOAD_PD 1 #define HAVE___BUILTIN_PREFETCH 1 #define HAVE_LINK_AVX 1 #define HAVE_LINK_AVX2 1 #define HAVE_XGETBV 1 #define HAVE_ATTRIBUTE_NONNULL 1 #define HAVE_ATTRIBUTE_TARGET_AVX 1 #define HAVE_ATTRIBUTE_TARGET_AVX2 1 #define HAVE___THREAD 1 #define HAVE_SINF 1 #define HAVE_COSF 1 #define HAVE_TANF 1 #define HAVE_SINHF 1 #define HAVE_COSHF 1 #define HAVE_TANHF 1 #define HAVE_FABSF 1 #define HAVE_FLOORF 1 #define HAVE_CEILF 1 #define HAVE_RINTF 1 #define HAVE_TRUNCF 1 #define HAVE_SQRTF 1 #define HAVE_LOG10F 1 #define HAVE_LOGF 1 #define HAVE_LOG1PF 1 #define HAVE_EXPF 1 #define HAVE_EXPM1F 1 #define HAVE_ASINF 1 #define HAVE_ACOSF 1 #define HAVE_ATANF 1 #define HAVE_ASINHF 1 #define HAVE_ACOSHF 1 #define HAVE_ATANHF 1 #define HAVE_HYPOTF 1 #define HAVE_ATAN2F 1 #define HAVE_POWF 1 #define HAVE_FMODF 1 #define HAVE_MODFF 1 #define HAVE_FREXPF 1 #define HAVE_LDEXPF 1 #define HAVE_EXP2F 1 #define HAVE_LOG2F 1 #define HAVE_COPYSIGNF 1 #define HAVE_NEXTAFTERF 1 #define HAVE_CBRTF 1 #define HAVE_SINL 1 #define HAVE_COSL 1 #define HAVE_TANL 1 #define HAVE_SINHL 1 #define HAVE_COSHL 1 #define HAVE_TANHL 1 #define HAVE_FABSL 1 #define HAVE_FLOORL 1 #define HAVE_CEILL 1 #define HAVE_RINTL 1 #define HAVE_TRUNCL 1 #define HAVE_SQRTL 1 #define HAVE_LOG10L 1 #define HAVE_LOGL 
1 #define HAVE_LOG1PL 1 #define HAVE_EXPL 1 #define HAVE_EXPM1L 1 #define HAVE_ASINL 1 #define HAVE_ACOSL 1 #define HAVE_ATANL 1 #define HAVE_ASINHL 1 #define HAVE_ACOSHL 1 #define HAVE_ATANHL 1 #define HAVE_HYPOTL 1 #define HAVE_ATAN2L 1 #define HAVE_POWL 1 #define HAVE_FMODL 1 #define HAVE_MODFL 1 #define HAVE_FREXPL 1 #define HAVE_LDEXPL 1 #define HAVE_EXP2L 1 #define HAVE_LOG2L 1 #define HAVE_COPYSIGNL 1 #define HAVE_NEXTAFTERL 1 #define HAVE_CBRTL 1 #define HAVE_DECL_SIGNBIT #define HAVE_COMPLEX_H 1 #define HAVE_CABS 1 #define HAVE_CACOS 1 #define HAVE_CACOSH 1 #define HAVE_CARG 1 #define HAVE_CASIN 1 #define HAVE_CASINH 1 #define HAVE_CATAN 1 #define HAVE_CATANH 1 #define HAVE_CCOS 1 #define HAVE_CCOSH 1 #define HAVE_CEXP 1 #define HAVE_CIMAG 1 #define HAVE_CLOG 1 #define HAVE_CONJ 1 #define HAVE_CPOW 1 #define HAVE_CPROJ 1 #define HAVE_CREAL 1 #define HAVE_CSIN 1 #define HAVE_CSINH 1 #define HAVE_CSQRT 1 #define HAVE_CTAN 1 #define HAVE_CTANH 1 #define HAVE_CABSF 1 #define HAVE_CACOSF 1 #define HAVE_CACOSHF 1 #define HAVE_CARGF 1 #define HAVE_CASINF 1 #define HAVE_CASINHF 1 #define HAVE_CATANF 1 #define HAVE_CATANHF 1 #define HAVE_CCOSF 1 #define HAVE_CCOSHF 1 #define HAVE_CEXPF 1 #define HAVE_CIMAGF 1 #define HAVE_CLOGF 1 #define HAVE_CONJF 1 #define HAVE_CPOWF 1 #define HAVE_CPROJF 1 #define HAVE_CREALF 1 #define HAVE_CSINF 1 #define HAVE_CSINHF 1 #define HAVE_CSQRTF 1 #define HAVE_CTANF 1 #define HAVE_CTANHF 1 #define HAVE_CABSL 1 #define HAVE_CACOSL 1 #define HAVE_CACOSHL 1 #define HAVE_CARGL 1 #define HAVE_CASINL 1 #define HAVE_CASINHL 1 #define HAVE_CATANL 1 #define HAVE_CATANHL 1 #define HAVE_CCOSL 1 #define HAVE_CCOSHL 1 #define HAVE_CEXPL 1 #define HAVE_CIMAGL 1 #define HAVE_CLOGL 1 #define HAVE_CONJL 1 #define HAVE_CPOWL 1 #define HAVE_CPROJL 1 #define HAVE_CREALL 1 #define HAVE_CSINL 1 #define HAVE_CSINHL 1 #define HAVE_CSQRTL 1 #define HAVE_CTANL 1 #define HAVE_CTANHL 1 #define NPY_RESTRICT restrict #define NPY_RELAXED_STRIDES_CHECKING 1 #define HAVE_LDOUBLE_INTEL_EXTENDED_16_BYTES_LE 1 #define NPY_PY3K 1 #ifndef __cplusplus /* #undef inline */ #endif #ifndef _NPY_NPY_CONFIG_H_ #error config.h should never be included directly, include npy_config.h instead #endif EOF adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/config.h' to sources. Generating build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/_numpyconfig.h C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c' clang: _configtest.c _configtest.c:1:5: warning: incompatible redeclaration of library function 'exp' [-Wincompatible-library-redeclaration] int exp (void); ^ _configtest.c:1:5: note: 'exp' is a builtin with type 'double (double)' 1 warning generated. clang _configtest.o -o _configtest success! 
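The lone `exp` probe here is a sanity check rather than feature detection: after config.h is written, numpy links a reference to `exp()` against the detected math library (`MATHLIB` is empty in the dump above, i.e. no extra `-l` flag is needed on macOS), and the build is aborted if that link fails. Schematically, under the same bogus-prototype pattern as before:

```c
/* Sketch of the math-library sanity check: declare and call exp(), then
 * link with no extra libraries -- matching the bare
 * `clang _configtest.o -o _configtest` link line above. On macOS the
 * symbol resolves from libSystem, so an empty MATHLIB is fine. */
int exp (void);   /* bogus prototype on purpose; only linkability matters */

int
main (void)
{
    exp();
    return 0;
}
```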
removing: _configtest.c _configtest.o _configtest.o.d _configtest C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c' clang: _configtest.c success! removing: _configtest.c _configtest.o _configtest.o.d C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c' clang: _configtest.c success! removing: _configtest.c _configtest.o _configtest.o.d File: build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/_numpyconfig.h #define NPY_SIZEOF_SHORT SIZEOF_SHORT #define NPY_SIZEOF_INT SIZEOF_INT #define NPY_SIZEOF_LONG SIZEOF_LONG #define NPY_SIZEOF_FLOAT 4 #define NPY_SIZEOF_COMPLEX_FLOAT 8 #define NPY_SIZEOF_DOUBLE 8 #define NPY_SIZEOF_COMPLEX_DOUBLE 16 #define NPY_SIZEOF_LONGDOUBLE 16 #define NPY_SIZEOF_COMPLEX_LONGDOUBLE 32 #define NPY_SIZEOF_PY_INTPTR_T 8 #define NPY_SIZEOF_OFF_T 8 #define NPY_SIZEOF_PY_LONG_LONG 8 #define NPY_SIZEOF_LONGLONG 8 #define NPY_NO_SMP 0 #define NPY_HAVE_DECL_ISNAN #define NPY_HAVE_DECL_ISINF #define NPY_HAVE_DECL_ISFINITE #define NPY_HAVE_DECL_SIGNBIT #define NPY_USE_C99_COMPLEX 1 #define NPY_HAVE_COMPLEX_DOUBLE 1 #define NPY_HAVE_COMPLEX_FLOAT 1 #define NPY_HAVE_COMPLEX_LONG_DOUBLE 1 #define NPY_RELAXED_STRIDES_CHECKING 1 #define NPY_USE_C99_FORMATS 1 #define NPY_VISIBILITY_HIDDEN __attribute__((visibility("hidden"))) #define NPY_ABI_VERSION 0x01000009 #define NPY_API_VERSION 0x0000000D #ifndef __STDC_FORMAT_MACROS #define __STDC_FORMAT_MACROS 1 #endif EOF adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/_numpyconfig.h' to sources. executing numpy/core/code_generators/generate_numpy_api.py adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__multiarray_api.h' to sources. 
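`generate_numpy_api.py` producing `__multiarray_api.h` is worth a note, because it is the reason downstream C extensions must call `import_array()`: numpy exports its C API as a table of function pointers declared in that generated header, and the table is only populated at import time. A minimal consumer sketch (the module name and file are illustrative, not part of this build):

```c
/* Hypothetical extension module showing why __multiarray_api.h matters:
 * numpy's C API lives in a function-pointer table that import_array()
 * fills in at module-init time; forgetting it leads to crashes on the
 * first PyArray_* call. */
#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include <numpy/arrayobject.h>

static struct PyModuleDef moduledef = {
    PyModuleDef_HEAD_INIT, "demo", NULL, -1, NULL
};

PyMODINIT_FUNC
PyInit_demo (void)
{
    import_array();   /* populates the PyArray_API pointer table */
    return PyModule_Create(&moduledef);
}
```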
numpy.core - nothing done with h_files = ['build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/config.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/_numpyconfig.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__multiarray_api.h'] building extension "numpy.core._multiarray_tests" sources creating build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/_multiarray_tests.c building extension "numpy.core._multiarray_umath" sources adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/config.h' to sources. adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/_numpyconfig.h' to sources. executing numpy/core/code_generators/generate_numpy_api.py adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__multiarray_api.h' to sources. executing numpy/core/code_generators/generate_ufunc_api.py adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__ufunc_api.h' to sources. conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/arraytypes.c conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/einsum.c conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/lowlevel_strided_loops.c conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/nditer_templ.c conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/scalartypes.c creating build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/funcs.inc adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath' to include_dirs. conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/simd.inc conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/loops.h conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/loops.c conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/matmul.h conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/matmul.c conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/scalarmath.c adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath' to include_dirs. conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/common/templ_common.h adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/common' to include_dirs. 
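Each `conv_template:>` line above expands a `.c.src` template into plain C: a `/**begin repeat ... end repeat**/` block is duplicated once per entry in its substitution lists, which is why the later warnings come in float/double/long-double triplets (`caddf`/`cadd`/`caddl`, and so on). A schematic of the template syntax (names and body here are illustrative, not the actual contents of those files):

```c
/* Schematic numpy .c.src template. conv_template duplicates the block once
 * per column of the #...# lists, substituting @type@ and @c@, so this emits
 * addf, addd and addl in the generated .c file. */
/**begin repeat
 * #type = float, double, long double#
 * #c    = f,     d,      l#
 */
static @type@
add@c@ (@type@ a, @type@ b)
{
    return a + b;
}
/**end repeat**/
```

The output of this step is what clang actually compiles, and the generated files carry `#line`-style provenance back to the template, which is why the diagnostics below cite `.c.src` line numbers.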
numpy.core - nothing done with h_files = ['build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/funcs.inc', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/simd.inc', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/loops.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/matmul.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/npy_math_internal.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/common/templ_common.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/config.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/_numpyconfig.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__multiarray_api.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__ufunc_api.h'] building extension "numpy.core._umath_tests" sources conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_umath_tests.c building extension "numpy.core._rational_tests" sources conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_rational_tests.c building extension "numpy.core._struct_ufunc_tests" sources conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_struct_ufunc_tests.c building extension "numpy.core._operand_flag_tests" sources conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_operand_flag_tests.c building extension "numpy.fft.fftpack_lite" sources building extension "numpy.linalg.lapack_lite" sources creating build/src.macosx-10.15-x86_64-3.9/numpy/linalg adding 'numpy/linalg/lapack_lite/python_xerbla.c' to sources. building extension "numpy.linalg._umath_linalg" sources adding 'numpy/linalg/lapack_lite/python_xerbla.c' to sources. conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/linalg/umath_linalg.c building extension "numpy.random.mtrand" sources creating build/src.macosx-10.15-x86_64-3.9/numpy/random building data_files sources build_src: building npy-pkg config files running build_py creating build/lib.macosx-10.15-x86_64-3.9 creating build/lib.macosx-10.15-x86_64-3.9/numpy copying numpy/conftest.py -> build/lib.macosx-10.15-x86_64-3.9/numpy copying numpy/version.py -> build/lib.macosx-10.15-x86_64-3.9/numpy copying numpy/_globals.py -> build/lib.macosx-10.15-x86_64-3.9/numpy copying numpy/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy copying numpy/dual.py -> build/lib.macosx-10.15-x86_64-3.9/numpy copying numpy/_distributor_init.py -> build/lib.macosx-10.15-x86_64-3.9/numpy copying numpy/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy copying numpy/ctypeslib.py -> build/lib.macosx-10.15-x86_64-3.9/numpy copying numpy/matlib.py -> build/lib.macosx-10.15-x86_64-3.9/numpy copying numpy/_pytesttester.py -> build/lib.macosx-10.15-x86_64-3.9/numpy copying build/src.macosx-10.15-x86_64-3.9/numpy/__config__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy creating build/lib.macosx-10.15-x86_64-3.9/numpy/compat copying numpy/compat/py3k.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/compat copying numpy/compat/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/compat copying numpy/compat/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/compat copying numpy/compat/_inspect.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/compat creating build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/umath.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/fromnumeric.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/_dtype.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying 
numpy/core/_add_newdocs.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/_methods.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/_internal.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/_string_helpers.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/multiarray.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/records.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/setup_common.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/_aliased_types.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/memmap.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/overrides.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/getlimits.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/_dtype_ctypes.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/defchararray.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/shape_base.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/machar.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/numeric.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/function_base.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/einsumfunc.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/umath_tests.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/numerictypes.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/_type_aliases.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/cversions.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/arrayprint.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core copying numpy/core/code_generators/generate_numpy_api.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core creating build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying numpy/distutils/unixccompiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying numpy/distutils/numpy_distribution.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying numpy/distutils/conv_template.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying numpy/distutils/cpuinfo.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying numpy/distutils/ccompiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying numpy/distutils/msvc9compiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying numpy/distutils/npy_pkg_config.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying numpy/distutils/compat.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying numpy/distutils/misc_util.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying numpy/distutils/log.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying numpy/distutils/line_endings.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying numpy/distutils/lib2def.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying numpy/distutils/pathccompiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying numpy/distutils/system_info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying 
numpy/distutils/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying numpy/distutils/core.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying numpy/distutils/__version__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying numpy/distutils/exec_command.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying numpy/distutils/from_template.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying numpy/distutils/mingw32ccompiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying numpy/distutils/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying numpy/distutils/extension.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying numpy/distutils/msvccompiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying numpy/distutils/intelccompiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying numpy/distutils/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils copying build/src.macosx-10.15-x86_64-3.9/numpy/distutils/__config__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils creating build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command copying numpy/distutils/command/build.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command copying numpy/distutils/command/config_compiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command copying numpy/distutils/command/build_ext.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command copying numpy/distutils/command/config.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command copying numpy/distutils/command/install_headers.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command copying numpy/distutils/command/build_py.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command copying numpy/distutils/command/build_src.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command copying numpy/distutils/command/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command copying numpy/distutils/command/sdist.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command copying numpy/distutils/command/build_scripts.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command copying numpy/distutils/command/bdist_rpm.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command copying numpy/distutils/command/install_clib.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command copying numpy/distutils/command/build_clib.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command copying numpy/distutils/command/autodist.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command copying numpy/distutils/command/egg_info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command copying numpy/distutils/command/install.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command copying numpy/distutils/command/develop.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command copying numpy/distutils/command/install_data.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command creating build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler copying numpy/distutils/fcompiler/gnu.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler copying numpy/distutils/fcompiler/compaq.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler copying numpy/distutils/fcompiler/intel.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler copying numpy/distutils/fcompiler/none.py -> 
build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler copying numpy/distutils/fcompiler/nag.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler copying numpy/distutils/fcompiler/pg.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler copying numpy/distutils/fcompiler/ibm.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler copying numpy/distutils/fcompiler/sun.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler copying numpy/distutils/fcompiler/lahey.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler copying numpy/distutils/fcompiler/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler copying numpy/distutils/fcompiler/g95.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler copying numpy/distutils/fcompiler/mips.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler copying numpy/distutils/fcompiler/hpux.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler copying numpy/distutils/fcompiler/environment.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler copying numpy/distutils/fcompiler/pathf95.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler copying numpy/distutils/fcompiler/absoft.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler copying numpy/distutils/fcompiler/vast.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler creating build/lib.macosx-10.15-x86_64-3.9/numpy/doc copying numpy/doc/misc.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc copying numpy/doc/internals.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc copying numpy/doc/creation.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc copying numpy/doc/constants.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc copying numpy/doc/ufuncs.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc copying numpy/doc/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc copying numpy/doc/broadcasting.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc copying numpy/doc/basics.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc copying numpy/doc/subclassing.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc copying numpy/doc/indexing.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc copying numpy/doc/byteswapping.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc copying numpy/doc/structured_arrays.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc copying numpy/doc/glossary.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc creating build/lib.macosx-10.15-x86_64-3.9/numpy/f2py copying numpy/f2py/cfuncs.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py copying numpy/f2py/common_rules.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py copying numpy/f2py/crackfortran.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py copying numpy/f2py/cb_rules.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py copying numpy/f2py/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py copying numpy/f2py/rules.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py copying numpy/f2py/f2py2e.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py copying numpy/f2py/func2subr.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py copying numpy/f2py/__version__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py copying numpy/f2py/diagnose.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py copying numpy/f2py/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py copying numpy/f2py/capi_maps.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py copying numpy/f2py/f90mod_rules.py -> 
build/lib.macosx-10.15-x86_64-3.9/numpy/f2py copying numpy/f2py/f2py_testing.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py copying numpy/f2py/use_rules.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py copying numpy/f2py/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py copying numpy/f2py/auxfuncs.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py copying numpy/f2py/__main__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py creating build/lib.macosx-10.15-x86_64-3.9/numpy/fft copying numpy/fft/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/fft copying numpy/fft/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/fft copying numpy/fft/helper.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/fft copying numpy/fft/fftpack.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/fft copying numpy/fft/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/fft creating build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/_iotools.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/mixins.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/nanfunctions.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/recfunctions.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/histograms.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/scimath.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/_version.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/user_array.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/format.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/twodim_base.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/financial.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/index_tricks.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/npyio.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/shape_base.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/stride_tricks.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/utils.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/arrayterator.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/function_base.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/arraysetops.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/arraypad.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/type_check.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/polynomial.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/_datasource.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib copying numpy/lib/ufunclike.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib creating build/lib.macosx-10.15-x86_64-3.9/numpy/linalg copying numpy/linalg/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/linalg copying numpy/linalg/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/linalg copying numpy/linalg/linalg.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/linalg copying numpy/linalg/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/linalg creating build/lib.macosx-10.15-x86_64-3.9/numpy/ma copying numpy/ma/extras.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma copying 
numpy/ma/version.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma copying numpy/ma/testutils.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma copying numpy/ma/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma copying numpy/ma/core.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma copying numpy/ma/bench.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma copying numpy/ma/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma copying numpy/ma/timer_comparison.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma copying numpy/ma/mrecords.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma creating build/lib.macosx-10.15-x86_64-3.9/numpy/matrixlib copying numpy/matrixlib/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/matrixlib copying numpy/matrixlib/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/matrixlib copying numpy/matrixlib/defmatrix.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/matrixlib creating build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial copying numpy/polynomial/laguerre.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial copying numpy/polynomial/_polybase.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial copying numpy/polynomial/polyutils.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial copying numpy/polynomial/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial copying numpy/polynomial/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial copying numpy/polynomial/hermite_e.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial copying numpy/polynomial/chebyshev.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial copying numpy/polynomial/polynomial.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial copying numpy/polynomial/legendre.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial copying numpy/polynomial/hermite.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial creating build/lib.macosx-10.15-x86_64-3.9/numpy/random copying numpy/random/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/random copying numpy/random/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/random copying numpy/random/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/random creating build/lib.macosx-10.15-x86_64-3.9/numpy/testing copying numpy/testing/nosetester.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing copying numpy/testing/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing copying numpy/testing/noseclasses.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing copying numpy/testing/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing copying numpy/testing/utils.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing copying numpy/testing/print_coercion_tables.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing copying numpy/testing/decorators.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing creating build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private copying numpy/testing/_private/nosetester.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private copying numpy/testing/_private/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private copying numpy/testing/_private/noseclasses.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private copying numpy/testing/_private/utils.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private copying numpy/testing/_private/parameterized.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private copying numpy/testing/_private/decorators.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private running 
build_clib customize UnixCCompiler customize UnixCCompiler using build_clib building 'npymath' library compiling C sources C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers creating build/temp.macosx-10.15-x86_64-3.9 creating build/temp.macosx-10.15-x86_64-3.9/numpy creating build/temp.macosx-10.15-x86_64-3.9/numpy/core creating build/temp.macosx-10.15-x86_64-3.9/numpy/core/src creating build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/npymath creating build/temp.macosx-10.15-x86_64-3.9/build creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9 creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath compile options: '-Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c' clang: numpy/core/src/npymath/npy_math.c clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/npy_math_complex.c clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/ieee754.c clang: numpy/core/src/npymath/halffloat.c numpy/core/src/npymath/npy_math_complex.c.src:48:33: warning: unused variable 'tiny' [-Wunused-const-variable] static const volatile npy_float tiny = 3.9443045e-31f; ^ numpy/core/src/npymath/npy_math_complex.c.src:67:25: warning: unused variable 'c_halff' [-Wunused-const-variable] static const npy_cfloat c_halff = {0.5F, 0.0}; ^ numpy/core/src/npymath/npy_math_complex.c.src:68:25: warning: unused variable 'c_if' [-Wunused-const-variable] static const npy_cfloat c_if = {0.0, 1.0F}; ^ numpy/core/src/npymath/npy_math_complex.c.src:69:25: warning: unused variable 'c_ihalff' [-Wunused-const-variable] static const npy_cfloat c_ihalff = {0.0, 0.5F}; ^ numpy/core/src/npymath/npy_math_complex.c.src:79:1: warning: unused function 'caddf' [-Wunused-function] caddf(npy_cfloat a, npy_cfloat b) ^ numpy/core/src/npymath/npy_math_complex.c.src:87:1: warning: unused function 'csubf' [-Wunused-function] csubf(npy_cfloat a, npy_cfloat b) ^ numpy/core/src/npymath/npy_math_complex.c.src:137:1: warning: unused function 'cnegf' [-Wunused-function] cnegf(npy_cfloat a) ^ numpy/core/src/npymath/npy_math_complex.c.src:144:1: warning: unused function 'cmulif' [-Wunused-function] cmulif(npy_cfloat a) ^ numpy/core/src/npymath/npy_math_complex.c.src:67:26: warning: unused 
variable 'c_half' [-Wunused-const-variable] static const npy_cdouble c_half = {0.5, 0.0}; ^ numpy/core/src/npymath/npy_math_complex.c.src:68:26: warning: unused variable 'c_i' [-Wunused-const-variable] static const npy_cdouble c_i = {0.0, 1.0}; ^ numpy/core/src/npymath/npy_math_complex.c.src:69:26: warning: unused variable 'c_ihalf' [-Wunused-const-variable] static const npy_cdouble c_ihalf = {0.0, 0.5}; ^ numpy/core/src/npymath/npy_math_complex.c.src:79:1: warning: unused function 'cadd' [-Wunused-function] cadd(npy_cdouble a, npy_cdouble b) ^ numpy/core/src/npymath/npy_math_complex.c.src:87:1: warning: unused function 'csub' [-Wunused-function] csub(npy_cdouble a, npy_cdouble b) ^ numpy/core/src/npymath/npy_math_complex.c.src:137:1: warning: unused function 'cneg' [-Wunused-function] cneg(npy_cdouble a) ^ numpy/core/src/npymath/npy_math_complex.c.src:144:1: warning: unused function 'cmuli' [-Wunused-function] cmuli(npy_cdouble a) ^ numpy/core/src/npymath/npy_math_complex.c.src:67:30: warning: unused variable 'c_halfl' [-Wunused-const-variable] static const npy_clongdouble c_halfl = {0.5L, 0.0}; ^ numpy/core/src/npymath/npy_math_complex.c.src:68:30: warning: unused variable 'c_il' [-Wunused-const-variable] static const npy_clongdouble c_il = {0.0, 1.0L}; ^ numpy/core/src/npymath/npy_math_complex.c.src:69:30: warning: unused variable 'c_ihalfl' [-Wunused-const-variable] static const npy_clongdouble c_ihalfl = {0.0, 0.5L}; ^ numpy/core/src/npymath/npy_math_complex.c.src:79:1: warning: unused function 'caddl' [-Wunused-function] caddl(npy_clongdouble a, npy_clongdouble b) ^ numpy/core/src/npymath/npy_math_complex.c.src:87:1: warning: unused function 'csubl' [-Wunused-function] csubl(npy_clongdouble a, npy_clongdouble b) ^ numpy/core/src/npymath/npy_math_complex.c.src:137:1: warning: unused function 'cnegl' [-Wunused-function] cnegl(npy_clongdouble a) ^ numpy/core/src/npymath/npy_math_complex.c.src:144:1: warning: unused function 'cmulil' [-Wunused-function] cmulil(npy_clongdouble a) ^ 22 warnings generated. 
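The 22 warnings just above are the flip side of that template expansion: `npy_math_complex.c.src` defines small static arithmetic helpers (`cadd`, `csub`, `cneg`, `cmuli` and their `f`/`l` variants) for all three precisions, and any instantiation the file never calls trips `-Wunused-function` under `-Wall`. They are benign. A minimal reproduction of the pattern:

```c
/* Minimal reproduction of the -Wunused-function warnings above: a static
 * helper that nothing in the translation unit calls is dead code, and
 * clang -Wall flags it, but the object file still builds fine. The name
 * is illustrative; the real ones (caddf, cneg, cmulil, ...) come from
 * template expansion. */
static int
unused_helper (int a, int b)
{
    return a + b;
}

int
main (void)
{
    return 0;   /* unused_helper never called -> warning, not an error */
}
```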
ar: adding 4 object files to build/temp.macosx-10.15-x86_64-3.9/libnpymath.a ranlib:@ build/temp.macosx-10.15-x86_64-3.9/libnpymath.a building 'npysort' library compiling C sources C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npysort compile options: '-Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c' clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npysort/quicksort.c clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npysort/mergesort.c clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npysort/heapsort.c clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npysort/selection.c clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npysort/binsearch.c
numpy/core/src/npysort/selection.c.src:328:9: warning: code will never be executed [-Wunreachable-code] npy_intp k; ^~~~~~~~~~~ numpy/core/src/npysort/selection.c.src:326:14: note: silence by adding parentheses to mark code as explicitly dead else if (0 && kth == num - 1) { ^ /* DISABLES CODE */ ( ) 22 warnings generated.
ar: adding 5 object files to build/temp.macosx-10.15-x86_64-3.9/libnpysort.a ranlib:@ build/temp.macosx-10.15-x86_64-3.9/libnpysort.a running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext building 'numpy.core._dummy' extension compiling C sources C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c' clang: numpy/core/src/dummymodule.c clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/dummymodule.o -L/usr/local/lib -L/usr/local/opt/openssl@1.1/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_dummy.cpython-39-darwin.so building 'numpy.core._multiarray_tests' extension compiling C sources C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk
-I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray creating build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c' clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/_multiarray_tests.c clang: numpy/core/src/common/mem_overlap.c clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/_multiarray_tests.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/mem_overlap.o -L/usr/local/lib -L/usr/local/opt/openssl@1.1/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -lnpymath -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_multiarray_tests.cpython-39-darwin.so building 'numpy.core._multiarray_umath' extension compiling C sources C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers creating build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray creating build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath creating build/temp.macosx-10.15-x86_64-3.9/private creating build/temp.macosx-10.15-x86_64-3.9/private/var creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e/numpy creating 
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e/numpy/_build_utils
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e/numpy/_build_utils/src
compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -DNO_ATLAS_INFO=3 -DHAVE_CBLAS -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
extra options: '-msse3 -I/System/Library/Frameworks/vecLib.framework/Headers'
clang: numpy/core/src/multiarray/alloc.c
clang: numpy/core/src/multiarray/calculation.c
clang: numpy/core/src/multiarray/array_assign_scalar.c
clang: numpy/core/src/multiarray/convert.c
clang: numpy/core/src/multiarray/ctors.c
clang: numpy/core/src/multiarray/datetime_busday.c
clang: numpy/core/src/multiarray/dragon4.c
clang: numpy/core/src/multiarray/flagsobject.c
numpy/core/src/multiarray/ctors.c:2261:36: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
    if (!(PyUString_Check(name) && PyUString_GET_SIZE(name) == 0)) {
                                   ^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
                           ^
/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
      PyUnicode_WSTR_LENGTH(op) : \
      ^
/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
                                  ^
/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
                                                     ^
[two companion warnings at the same site follow: 'PyUnicode_AsUnicode' (via unicodeobject.h:262) and a second '_PyUnicode_get_wstr_length' (via unicodeobject.h:264); every PyUString_*/PyUnicode_GET_* site below triggers this same three-warning macro-expansion chain, so only the summary line and offending source line are kept from here on]
clang: numpy/core/src/multiarray/arrayobject.c
clang: numpy/core/src/multiarray/array_assign_array.c
clang: numpy/core/src/multiarray/convert_datatype.c
clang: numpy/core/src/multiarray/getset.c
clang: numpy/core/src/multiarray/datetime_busdaycal.c
clang: numpy/core/src/multiarray/buffer.c
clang: numpy/core/src/multiarray/compiled_base.c
clang: numpy/core/src/multiarray/hashdescr.c
clang: numpy/core/src/multiarray/descriptor.c
numpy/core/src/multiarray/descriptor.c:453:13: warning: '_PyUnicode_get_wstr_length' / 'PyUnicode_AsUnicode' deprecated [-Wdeprecated-declarations] (×3, chain as above)
    if (PyUString_GET_SIZE(name) == 0) {
numpy/core/src/multiarray/descriptor.c:460:48: warning: '_PyUnicode_get_wstr_length' / 'PyUnicode_AsUnicode' deprecated [-Wdeprecated-declarations] (×3, chain as above)
    else if (PyUString_Check(title) && PyUString_GET_SIZE(title) > 0) {
clang: numpy/core/src/multiarray/conversion_utils.c
clang: numpy/core/src/multiarray/item_selection.c
clang: numpy/core/src/multiarray/dtype_transfer.c
clang: numpy/core/src/multiarray/mapping.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/arraytypes.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/nditer_templ.c
3 warnings generated.
clang: numpy/core/src/multiarray/datetime.c
numpy/core/src/multiarray/arraytypes.c.src:477:11: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations] (via PyUnicode_AS_UNICODE)
    ptr = PyUnicode_AS_UNICODE(temp);
numpy/core/src/multiarray/arraytypes.c.src:482:15: warning: '_PyUnicode_get_wstr_length' / 'PyUnicode_AsUnicode' deprecated [-Wdeprecated-declarations] (×3, via PyUnicode_GET_DATA_SIZE)
    datalen = PyUnicode_GET_DATA_SIZE(temp);
clang: numpy/core/src/multiarray/common.c
numpy/core/src/multiarray/common.c:187:28: warning: '_PyUnicode_get_wstr_length' / 'PyUnicode_AsUnicode' deprecated [-Wdeprecated-declarations] (×3)
    itemsize = PyUnicode_GET_DATA_SIZE(temp);
numpy/core/src/multiarray/common.c:239:28: warning: same (×3)
    itemsize = PyUnicode_GET_DATA_SIZE(temp);
numpy/core/src/multiarray/common.c:282:24: warning: same (×3)
    int itemsize = PyUnicode_GET_DATA_SIZE(obj);
6 warnings generated.
clang: numpy/core/src/multiarray/nditer_pywrap.c
9 warnings generated.
clang: numpy/core/src/multiarray/sequence.c
clang: numpy/core/src/multiarray/shape.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/einsum.c
clang: numpy/core/src/multiarray/methods.c
clang: numpy/core/src/multiarray/iterators.c
clang: numpy/core/src/multiarray/datetime_strings.c
clang: numpy/core/src/multiarray/number.c
clang: numpy/core/src/multiarray/scalarapi.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/scalartypes.c
numpy/core/src/multiarray/scalarapi.c:74:28: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations] (via PyUnicode_AS_DATA)
    return (void *)PyUnicode_AS_DATA(scalar);
numpy/core/src/multiarray/scalarapi.c:135:28: warning: same
    return (void *)PyUnicode_AS_DATA(scalar);
numpy/core/src/multiarray/scalarapi.c:568:29: warning: '_PyUnicode_get_wstr_length' / 'PyUnicode_AsUnicode' deprecated [-Wdeprecated-declarations] (×3)
    descr->elsize = PyUnicode_GET_DATA_SIZE(sc);
numpy/core/src/multiarray/scalartypes.c.src:475:17: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
    ip = dptr = PyUnicode_AS_UNICODE(self);
numpy/core/src/multiarray/scalartypes.c.src:476:11: warning: '_PyUnicode_get_wstr_length' / 'PyUnicode_AsUnicode' deprecated [-Wdeprecated-declarations] (×3)
    len = PyUnicode_GET_SIZE(self);
numpy/core/src/multiarray/scalartypes.c.src:481:11: warning: 'PyUnicode_FromUnicode' is deprecated [-Wdeprecated-declarations]
    new = PyUnicode_FromUnicode(ip, len);
[this 475/476/481 block is emitted twice, once per generated specialization]
numpy/core/src/multiarray/scalartypes.c.src:1849:18: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations] (via PyUnicode_AS_DATA)
    buffer = PyUnicode_AS_DATA(self);
numpy/core/src/multiarray/scalartypes.c.src:1850:18: warning: '_PyUnicode_get_wstr_length' / 'PyUnicode_AsUnicode' deprecated [-Wdeprecated-declarations] (×3)
    buflen = PyUnicode_GET_DATA_SIZE(self);
5 warnings generated.
clang: numpy/core/src/multiarray/typeinfo.c
clang: numpy/core/src/multiarray/refcount.c
clang: numpy/core/src/multiarray/usertypes.c
clang: numpy/core/src/multiarray/multiarraymodule.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/lowlevel_strided_loops.c
clang: numpy/core/src/multiarray/vdot.c
clang: numpy/core/src/umath/umathmodule.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/matmul.c
clang: numpy/core/src/umath/reduction.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/loops.c
clang: numpy/core/src/multiarray/nditer_api.c
14 warnings generated.
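All of the `-Wdeprecated-declarations` noise above traces back to the old `Py_UNICODE` ("wstr") string representation, deprecated since Python 3.3 by PEP 393; numpy's `PyUString_*` compatibility macros still expand to it. As a hedged sketch (this is not numpy's actual patch, just the generic replacement pattern for the flagged macros; the helper name is invented):

```c
#define PY_SSIZE_T_CLEAN
#include <Python.h>

/* Illustrative helper: read the length and first code point of a str,
 * and copy it, without touching the deprecated wstr representation. */
static PyObject *
head_demo(PyObject *str)
{
    /* old: len = PyUnicode_GET_SIZE(str); ip = PyUnicode_AS_UNICODE(str); */
    Py_ssize_t len = PyUnicode_GET_LENGTH(str);        /* PEP 393 replacement */
    Py_UCS4 first = len ? PyUnicode_READ_CHAR(str, 0) : 0;

    /* old: new = PyUnicode_FromUnicode(ip, len);
     * new: build strings from an existing object (or UCS4/UTF-8 data). */
    PyObject *copy = PyUnicode_Substring(str, 0, len);
    if (copy == NULL) {
        return NULL;
    }
    return Py_BuildValue("(nNC)", len, copy, (int)first);
}
```

Note that at this point in the log these are still only warnings; they do not by themselves stop the compilation.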
clang: numpy/core/src/multiarray/strfuncs.c
numpy/core/src/umath/loops.c.src:655:18: warning: 'PyEval_CallObjectWithKeywords' is deprecated [-Wdeprecated-declarations]
        result = PyEval_CallObject(tocall, arglist);
                 ^
/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/ceval.h:24:5: note: expanded from macro 'PyEval_CallObject'
    PyEval_CallObjectWithKeywords(callable, arg, (PyObject *)NULL)
    ^
/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/ceval.h:17:1: note: 'PyEval_CallObjectWithKeywords' has been explicitly marked deprecated here
Py_DEPRECATED(3.9) PyAPI_FUNC(PyObject *) PyEval_CallObjectWithKeywords(
^
/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
                                                     ^
numpy/core/src/multiarray/strfuncs.c:178:13: warning: 'PyEval_CallObjectWithKeywords' is deprecated [-Wdeprecated-declarations] (same chain)
    s = PyEval_CallObject(PyArray_ReprFunction, arglist);
numpy/core/src/multiarray/strfuncs.c:195:13: warning: 'PyEval_CallObjectWithKeywords' is deprecated [-Wdeprecated-declarations] (same chain)
    s = PyEval_CallObject(PyArray_StrFunction, arglist);
2 warnings generated.
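`PyEval_CallObjectWithKeywords` was deprecated in Python 3.9 itself (note the `Py_DEPRECATED(3.9)` marker above), which is why this warning only appears when building this numpy against 3.9. The documented replacement is the `PyObject_Call` family; a minimal sketch, with the call pattern mirroring the flagged lines and the wrapper name invented:

```c
#define PY_SSIZE_T_CLEAN
#include <Python.h>

/* old (warns on 3.9): result = PyEval_CallObject(tocall, arglist); */
static PyObject *
call_demo(PyObject *tocall, PyObject *arglist)
{
    /* PyObject_CallObject takes a callable plus an args tuple (or NULL),
     * with the same semantics as the deprecated PyEval_CallObject macro. */
    return PyObject_CallObject(tocall, arglist);
}
```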
clang: numpy/core/src/multiarray/temp_elide.c
clang: numpy/core/src/umath/cpuid.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/scalarmath.c
clang: numpy/core/src/umath/ufunc_object.c
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'byte_long' [-Wunused-function]
byte_long(PyObject *obj)
^
[the same -Wunused-function warning is emitted at scalarmath.c.src:1449:1 for ubyte_long, short_long, ushort_long, int_long, uint_long, long_long, ulong_long, longlong_long, ulonglong_long, half_long, float_long, double_long, longdouble_long, cfloat_long, cdouble_long and clongdouble_long]
clang: numpy/core/src/multiarray/nditer_constr.c
numpy/core/src/umath/ufunc_object.c:657:19: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
    for (i = 0; i < len; i++) {
                ~ ^ ~~~
clang: numpy/core/src/umath/override.c
clang: numpy/core/src/npymath/npy_math.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/ieee754.c
numpy/core/src/umath/loops.c.src:2527:22: warning: code will never be executed [-Wunreachable-code]
        npy_intp n = dimensions[0];
                     ^~~~~~~~~~
numpy/core/src/umath/loops.c.src:2526:29: note: silence by adding parentheses to mark code as explicitly dead
    if (IS_BINARY_REDUCE && 0) {
                            ^  /* DISABLES CODE */ ( )
[this warning/note pair appears three times]
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/npy_math_complex.c
numpy/core/src/npymath/npy_math_complex.c.src:48:33: warning: unused variable 'tiny' [-Wunused-const-variable]
static const volatile npy_float tiny = 3.9443045e-31f;
                                ^
[analogous -Wunused-const-variable warnings follow for c_halff, c_if, c_ihalff, c_half, c_i, c_ihalf, c_halfl, c_il and c_ihalfl, and -Wunused-function warnings for caddf, csubf, cnegf, cmulif, cadd, csub, cneg, cmuli, caddl, csubl, cnegl and cmulil]
22 warnings generated.
clang: numpy/core/src/common/mem_overlap.c
clang: numpy/core/src/npymath/halffloat.c
clang: numpy/core/src/common/array_assign.c
clang: numpy/core/src/common/ufunc_override.c
clang: numpy/core/src/common/npy_longdouble.c
clang: numpy/core/src/common/numpyos.c
clang: numpy/core/src/common/ucsnarrow.c
1 warning generated.
clang: numpy/core/src/umath/extobj.c
numpy/core/src/common/ucsnarrow.c:139:34: warning: 'PyUnicode_FromUnicode' is deprecated [-Wdeprecated-declarations] (chain as above)
    ret = (PyUnicodeObject *)PyUnicode_FromUnicode((Py_UNICODE*)buf,
1 warning generated.
clang: numpy/core/src/common/python_xerbla.c
clang: numpy/core/src/common/cblasfuncs.c
clang: /private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e/numpy/_build_utils/src/apple_sgemv_fix.c
In file included from /private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e/numpy/_build_utils/src/apple_sgemv_fix.c:26:
In file included from numpy/core/include/numpy/arrayobject.h:4:
In file included from numpy/core/include/numpy/ndarrayobject.h:21:
build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__multiarray_api.h:1463:1: warning: unused function '_import_array' [-Wunused-function]
_import_array(void)
^
1 warning generated.
17 warnings generated.
clang: numpy/core/src/umath/ufunc_type_resolution.c
4 warnings generated.
4 warnings generated.
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/alloc.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/arrayobject.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/arraytypes.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/array_assign_scalar.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/array_assign_array.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/buffer.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/calculation.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/compiled_base.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/common.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/convert.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/convert_datatype.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/conversion_utils.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/ctors.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/datetime.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/datetime_strings.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/datetime_busday.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/datetime_busdaycal.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/descriptor.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/dragon4.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/dtype_transfer.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/einsum.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/flagsobject.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/getset.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/hashdescr.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/item_selection.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/iterators.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/lowlevel_strided_loops.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/mapping.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/methods.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/multiarraymodule.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/nditer_templ.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/nditer_api.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/nditer_constr.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/nditer_pywrap.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/number.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/refcount.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/sequence.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/shape.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/scalarapi.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/scalartypes.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/strfuncs.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/temp_elide.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/typeinfo.o 
build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/usertypes.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/vdot.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/umathmodule.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/reduction.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/loops.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/matmul.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/ufunc_object.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/extobj.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/cpuid.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/scalarmath.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/ufunc_type_resolution.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/override.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/npy_math.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/ieee754.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/npy_math_complex.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/halffloat.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/array_assign.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/mem_overlap.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/npy_longdouble.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/ucsnarrow.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/ufunc_override.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/numpyos.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/cblasfuncs.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/python_xerbla.o build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e/numpy/_build_utils/src/apple_sgemv_fix.o -L/usr/local/lib -L/usr/local/opt/openssl@1.1/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -lnpymath -lnpysort -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_multiarray_umath.cpython-39-darwin.so -Wl,-framework -Wl,Accelerate building 'numpy.core._umath_tests' extension compiling C sources C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common 
-Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c' clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_umath_tests.c clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_umath_tests.o -L/usr/local/lib -L/usr/local/opt/openssl@1.1/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_umath_tests.cpython-39-darwin.so building 'numpy.core._rational_tests' extension compiling C sources C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c' clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_rational_tests.c clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_rational_tests.o -L/usr/local/lib -L/usr/local/opt/openssl@1.1/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_rational_tests.cpython-39-darwin.so building 'numpy.core._struct_ufunc_tests' extension compiling C sources C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common 
-Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c' clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_struct_ufunc_tests.c clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_struct_ufunc_tests.o -L/usr/local/lib -L/usr/local/opt/openssl@1.1/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_struct_ufunc_tests.cpython-39-darwin.so building 'numpy.core._operand_flag_tests' extension compiling C sources C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c' clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_operand_flag_tests.c clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_operand_flag_tests.o -L/usr/local/lib -L/usr/local/opt/openssl@1.1/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_operand_flag_tests.cpython-39-darwin.so building 'numpy.fft.fftpack_lite' extension compiling C sources C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers creating build/temp.macosx-10.15-x86_64-3.9/numpy/fft compile options: '-Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common 
-Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c' clang: numpy/fft/fftpack_litemodule.c clang: numpy/fft/fftpack.c clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/numpy/fft/fftpack_litemodule.o build/temp.macosx-10.15-x86_64-3.9/numpy/fft/fftpack.o -L/usr/local/lib -L/usr/local/opt/openssl@1.1/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/fft/fftpack_lite.cpython-39-darwin.so building 'numpy.linalg.lapack_lite' extension compiling C sources C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers creating build/temp.macosx-10.15-x86_64-3.9/numpy/linalg creating build/temp.macosx-10.15-x86_64-3.9/numpy/linalg/lapack_lite compile options: '-DNO_ATLAS_INFO=3 -DHAVE_CBLAS -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c' extra options: '-msse3 -I/System/Library/Frameworks/vecLib.framework/Headers' clang: numpy/linalg/lapack_litemodule.c clang: numpy/linalg/lapack_lite/python_xerbla.c clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/numpy/linalg/lapack_litemodule.o build/temp.macosx-10.15-x86_64-3.9/numpy/linalg/lapack_lite/python_xerbla.o -L/usr/local/lib -L/usr/local/opt/openssl@1.1/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/linalg/lapack_lite.cpython-39-darwin.so -Wl,-framework -Wl,Accelerate building 'numpy.linalg._umath_linalg' extension compiling C sources C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/linalg compile options: '-DNO_ATLAS_INFO=3 -DHAVE_CBLAS -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include 
-I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c' extra options: '-msse3 -I/System/Library/Frameworks/vecLib.framework/Headers' clang: build/src.macosx-10.15-x86_64-3.9/numpy/linalg/umath_linalg.c numpy/linalg/umath_linalg.c.src:735:32: warning: unknown warning group '-Wmaybe-uninitialized', ignored [-Wunknown-warning-option] #pragma GCC diagnostic ignored "-Wmaybe-uninitialized" ^ numpy/linalg/umath_linalg.c.src:541:1: warning: unused function 'dump_ufunc_object' [-Wunused-function] dump_ufunc_object(PyUFuncObject* ufunc) ^ numpy/linalg/umath_linalg.c.src:566:1: warning: unused function 'dump_linearize_data' [-Wunused-function] dump_linearize_data(const char* name, const LINEARIZE_DATA_t* params) ^ numpy/linalg/umath_linalg.c.src:602:1: warning: unused function 'dump_FLOAT_matrix' [-Wunused-function] dump_FLOAT_matrix(const char* name, ^ numpy/linalg/umath_linalg.c.src:602:1: warning: unused function 'dump_DOUBLE_matrix' [-Wunused-function] dump_DOUBLE_matrix(const char* name, ^ numpy/linalg/umath_linalg.c.src:602:1: warning: unused function 'dump_CFLOAT_matrix' [-Wunused-function] dump_CFLOAT_matrix(const char* name, ^ numpy/linalg/umath_linalg.c.src:602:1: warning: unused function 'dump_CDOUBLE_matrix' [-Wunused-function] dump_CDOUBLE_matrix(const char* name, ^ numpy/linalg/umath_linalg.c.src:865:1: warning: unused function 'zero_FLOAT_matrix' [-Wunused-function] zero_FLOAT_matrix(void *dst_in, const LINEARIZE_DATA_t* data) ^ numpy/linalg/umath_linalg.c.src:865:1: warning: unused function 'zero_DOUBLE_matrix' [-Wunused-function] zero_DOUBLE_matrix(void *dst_in, const LINEARIZE_DATA_t* data) ^ numpy/linalg/umath_linalg.c.src:865:1: warning: unused function 'zero_CFLOAT_matrix' [-Wunused-function] zero_CFLOAT_matrix(void *dst_in, const LINEARIZE_DATA_t* data) ^ numpy/linalg/umath_linalg.c.src:865:1: warning: unused function 'zero_CDOUBLE_matrix' [-Wunused-function] zero_CDOUBLE_matrix(void *dst_in, const LINEARIZE_DATA_t* data) ^ numpy/linalg/umath_linalg.c.src:1862:1: warning: unused function 'dump_geev_params' [-Wunused-function] dump_geev_params(const char *name, GEEV_PARAMS_t* params) ^ numpy/linalg/umath_linalg.c.src:2132:1: warning: unused function 'init_cgeev' [-Wunused-function] init_cgeev(GEEV_PARAMS_t* params, ^ numpy/linalg/umath_linalg.c.src:2213:1: warning: unused function 'process_cgeev_results' [-Wunused-function] process_cgeev_results(GEEV_PARAMS_t *NPY_UNUSED(params)) ^ numpy/linalg/umath_linalg.c.src:2376:1: warning: unused function 'dump_gesdd_params' [-Wunused-function] dump_gesdd_params(const char *name, ^ numpy/linalg/umath_linalg.c.src:2864:1: warning: unused function 'dump_gelsd_params' [-Wunused-function] dump_gelsd_params(const char *name, ^ 16 warnings generated. 
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/linalg/umath_linalg.o build/temp.macosx-10.15-x86_64-3.9/numpy/linalg/lapack_lite/python_xerbla.o -L/usr/local/lib -L/usr/local/opt/openssl@1.1/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -lnpymath -o build/lib.macosx-10.15-x86_64-3.9/numpy/linalg/_umath_linalg.cpython-39-darwin.so -Wl,-framework -Wl,Accelerate building 'numpy.random.mtrand' extension compiling C sources C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers creating build/temp.macosx-10.15-x86_64-3.9/numpy/random creating build/temp.macosx-10.15-x86_64-3.9/numpy/random/mtrand compile options: '-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c' clang: numpy/random/mtrand/mtrand.c clang: numpy/random/mtrand/initarray.cclang: numpy/random/mtrand/randomkit.c clang: numpy/random/mtrand/distributions.c numpy/random/mtrand/mtrand.c:40400:34: error: no member named 'tp_print' in 'struct _typeobject' __pyx_type_6mtrand_RandomState.tp_print = 0; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^ numpy/random/mtrand/mtrand.c:42673:22: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations] (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 : ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE' PyUnicode_WSTR_LENGTH(op) : \ ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH' #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op) ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here Py_DEPRECATED(3.3) ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED' #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__)) ^ numpy/random/mtrand/mtrand.c:42673:22: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations] (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 
1 : ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE' ((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\ ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode( ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED' #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__)) ^ numpy/random/mtrand/mtrand.c:42673:22: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations] (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 : ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE' PyUnicode_WSTR_LENGTH(op))) ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH' #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op) ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here Py_DEPRECATED(3.3) ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED' #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__)) ^ numpy/random/mtrand/mtrand.c:42673:52: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations] (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 : ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE' PyUnicode_WSTR_LENGTH(op) : \ ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH' #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op) ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here Py_DEPRECATED(3.3) ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED' #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__)) ^ numpy/random/mtrand/mtrand.c:42673:52: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations] (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 
1 : ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE' ((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\ ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode( ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED' #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__)) ^ numpy/random/mtrand/mtrand.c:42673:52: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations] (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 : ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE' PyUnicode_WSTR_LENGTH(op))) ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH' #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op) ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here Py_DEPRECATED(3.3) ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED' #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__)) ^ numpy/random/mtrand/mtrand.c:42689:26: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations] (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 : ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE' PyUnicode_WSTR_LENGTH(op) : \ ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH' #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op) ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here Py_DEPRECATED(3.3) ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED' #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__)) ^ numpy/random/mtrand/mtrand.c:42689:26: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations] (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 
1 : ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE' ((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\ ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode( ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED' #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__)) ^ numpy/random/mtrand/mtrand.c:42689:26: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations] (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 : ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE' PyUnicode_WSTR_LENGTH(op))) ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH' #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op) ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here Py_DEPRECATED(3.3) ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED' #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__)) ^ numpy/random/mtrand/mtrand.c:42689:59: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations] (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 : ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE' PyUnicode_WSTR_LENGTH(op) : \ ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH' #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op) ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here Py_DEPRECATED(3.3) ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED' #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__)) ^ numpy/random/mtrand/mtrand.c:42689:59: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations] (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 
1 : ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE' ((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\ ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode( ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED' #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__)) ^ numpy/random/mtrand/mtrand.c:42689:59: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations] (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 : ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE' PyUnicode_WSTR_LENGTH(op))) ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH' #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op) ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here Py_DEPRECATED(3.3) ^ /usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED' #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__)) ^ 12 warnings and 1 error generated. error: Command "clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c numpy/random/mtrand/mtrand.c -o build/temp.macosx-10.15-x86_64-3.9/numpy/random/mtrand/mtrand.o -MMD -MF build/temp.macosx-10.15-x86_64-3.9/numpy/random/mtrand/mtrand.o.d" failed with exit status 1
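The one hard failure buried in this log is `error: no member named 'tp_print' in 'struct _typeobject'`: the `tp_print` slot was removed from CPython's `PyTypeObject` in Python 3.9, so the old Cython-generated `mtrand.c` no longer compiles. A minimal sketch of the usual remedy, assuming the goal is simply a numpy that installs on 3.9:

```python
# Sketch only: the pin below assumes numpy >= 1.19.3 was the first release
# with Python 3.9 support; older sdists still assign to PyTypeObject.tp_print,
# the slot removed in CPython 3.9 -- exactly the fatal error reported above.
import sys

requirement = "numpy>=1.19.3" if sys.version_info >= (3, 9) else "numpy"
print(f"pip install '{requirement}'")
```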
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1696/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1696/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1695
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1695/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1695/comments
https://api.github.com/repos/huggingface/datasets/issues/1695/events
https://github.com/huggingface/datasets/pull/1695
780,971,987
MDExOlB1bGxSZXF1ZXN0NTUwNzc1OTU4
1,695
fix ner_tag bugs in thainer
{ "avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4", "events_url": "https://api.github.com/users/cstorm125/events{/privacy}", "followers_url": "https://api.github.com/users/cstorm125/followers", "following_url": "https://api.github.com/users/cstorm125/following{/other_user}", "gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cstorm125", "id": 15519308, "login": "cstorm125", "node_id": "MDQ6VXNlcjE1NTE5MzA4", "organizations_url": "https://api.github.com/users/cstorm125/orgs", "received_events_url": "https://api.github.com/users/cstorm125/received_events", "repos_url": "https://api.github.com/users/cstorm125/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions", "type": "User", "url": "https://api.github.com/users/cstorm125" }
[]
closed
false
null
[]
null
[ "> Thanks :)\r\n> \r\n> Apparently the dummy_data.zip got removed. Is this expected ?\r\n> Also can you remove the `data-pos.conll` file that you added ?\r\n\r\nNot expected. I forgot to remove the `dummy_data` folder used to create `dummy_data.zip`. \r\nChanged to only `dummy_data.zip`." ]
"2021-01-07T02:12:33Z"
"2021-01-07T14:43:45Z"
"2021-01-07T14:43:28Z"
CONTRIBUTOR
null
Fix a bug that caused `ner_tag` to always be equal to `'O'`.
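A hypothetical illustration of this bug class (the real patch lives in the PR diff and is not reproduced here): a tag that is read but never matched against the label list falls back to `'O'` for every token.

```python
# Hypothetical sketch, not the actual thainer patch: comparing a raw tag that
# still carries trailing whitespace against the label list always misses,
# so every token silently becomes "O".
NER_TAGS = ["O", "B-PERSON", "I-PERSON", "B-ORGANIZATION", "I-ORGANIZATION"]

def parse_tag(raw_tag: str) -> str:
    tag = raw_tag.strip()  # the fix in this sketch: normalize before matching
    return tag if tag in NER_TAGS else "O"

assert parse_tag("B-PERSON\n") == "B-PERSON"
```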
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1695/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1695/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1695.diff", "html_url": "https://github.com/huggingface/datasets/pull/1695", "merged_at": "2021-01-07T14:43:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/1695.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1695" }
true
https://api.github.com/repos/huggingface/datasets/issues/1694
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1694/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1694/comments
https://api.github.com/repos/huggingface/datasets/issues/1694/events
https://github.com/huggingface/datasets/pull/1694
780,429,080
MDExOlB1bGxSZXF1ZXN0NTUwMzI0Mjcx
1,694
Add OSCAR
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "Hi @lhoestq, on the OSCAR dataset, the document boundaries are defined by an empty line. Are there any chances to keep this empty line or explicitly group the sentences of a document? I'm asking for this 'cause I need to know if some sentences belong to the same document on my current OSCAR dataset usage.", "Indeed currently it yields one example per line and ignore the empty lines.\r\nMaybe the best is to group them by paragraph then, and yield one example when an empty line is found.\r\nWhat do you think ?", "I think to group them is the best choice indeed, I actually did this on [brwac](https://github.com/huggingface/datasets/tree/master/datasets/brwac) dataset too, it's another huge textual dataset.", "Ok I just launched the computation of the dataset_infos.json again by grouping lines in paragraphs.\r\nThe new _generate_examples is\r\n```python\r\n def _generate_examples(self, filepaths):\r\n \"\"\"This function returns the examples in the raw (text) form.\"\"\"\r\n id_ = 0\r\n current_lines = []\r\n for filepath in filepaths:\r\n logging.info(\"generating examples from = %s\", filepath)\r\n with gzip.open(filepath, \"rt\") as f:\r\n for line in f:\r\n if len(line.strip()) > 0:\r\n current_lines.append(line)\r\n else:\r\n feature = id_, {\"id\": id_, \"text\": \"\".join(current_lines)}\r\n yield feature\r\n id_ += 1\r\n current_lines = []\r\n # last paragraph\r\n if current_lines:\r\n feature = id_, {\"id\": id_, \"text\": \"\".join(current_lines)}\r\n yield feature\r\n```", "Is there any chance to also keep the sentences raw (without the `\"\".join()`)?. This is useful if you wanna train models where one of the tasks you perform is document sentence permutation... that's my case :)", "They are raw in the sense that nothing is changed from the raw file for each paragraph.\r\nYou can split sentences on new lines `\\n` for example.\r\n\r\nThe first example for the unshuffled deduplicated english is going to be \r\n> Mtendere Village was inspired by the vision of Chief Napoleon Dzombe, which he shared with John Blanchard during his first visit to Malawi. Chief Napoleon conveyed the desperate need for a program to intervene and care for the orphans and vulnerable children (OVC) in Malawi, and John committed to help.\r\n> Established in honor of John & Lindy’s son, Christopher Blanchard, this particular program is very dear to the Blanchard family. Dana Blanchard, or Mama Dana as she is more commonly referred to at Mtendere, lived on site during the initial development, and she returns each summer to spend the season with her Malawian family. The heart of the program is to be His hands and feet by caring for the children at Mtendere, and meeting their spiritual, physical, academic, and emotional needs.\r\n> [...]\r\n> 100X Development Foundation, Inc. is registered 503 (c)(3) nonprofit organization. Donations are deductable to the full extent allowable under IRS regulations.", "I thought the line reader would omit the `\\n` character. I can easily split the sentences as you suggested. Thanks @lhoestq! 😃 ", "The recomputation of the metadata finished a few days ago, I'll update the PR soon :) ", "Let me know if you have comments @pjox @jonatasgrosman :) \r\n\r\nOtherwise we can merge it", "Everything seems fine to me 😄 " ]
"2021-01-06T10:21:08Z"
"2021-01-25T09:10:33Z"
"2021-01-25T09:10:32Z"
MEMBER
null
Continuation of #348. The files have been moved to S3, and only the unshuffled version is available. Both original and deduplicated versions of each language are available. Example of usage: ```python from datasets import load_dataset oscar_dedup_en = load_dataset("oscar", "unshuffled_deduplicated_en", split="train") oscar_orig_fr = load_dataset("oscar", "unshuffled_original_fr", split="train") ``` cc @pjox @jonatasgrosman ------------- To make the metadata generation work in parallel, I made a few changes to the `datasets-cli test` command to add the `num_proc` and `proc_rank` arguments. This way you can run multiple processes for the metadata computation. ``` datasets-cli test ./datasets/oscar --save_infos --all_configs --num_proc 4 --proc_rank 0 --clear_cache --cache_dir tmp0 ``` ------------- ToDo: add the dummy_data
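A sketch of driving all four shards of that metadata computation at once, using only the flags shown above (the paths and shard count are the ones from this description):

```python
# Launch one `datasets-cli test` process per proc_rank, mirroring the command above.
import subprocess

NUM_PROC = 4
procs = [
    subprocess.Popen(
        [
            "datasets-cli", "test", "./datasets/oscar",
            "--save_infos", "--all_configs",
            "--num_proc", str(NUM_PROC),
            "--proc_rank", str(rank),
            "--clear_cache",
            "--cache_dir", f"tmp{rank}",
        ]
    )
    for rank in range(NUM_PROC)
]
for proc in procs:
    proc.wait()
```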
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 2, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/1694/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1694/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1694.diff", "html_url": "https://github.com/huggingface/datasets/pull/1694", "merged_at": "2021-01-25T09:10:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/1694.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1694" }
true
https://api.github.com/repos/huggingface/datasets/issues/1693
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1693/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1693/comments
https://api.github.com/repos/huggingface/datasets/issues/1693/events
https://github.com/huggingface/datasets/pull/1693
780,268,595
MDExOlB1bGxSZXF1ZXN0NTUwMTc3MDEx
1,693
Fix reuters metadata parsing errors
{ "avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4", "events_url": "https://api.github.com/users/jbragg/events{/privacy}", "followers_url": "https://api.github.com/users/jbragg/followers", "following_url": "https://api.github.com/users/jbragg/following{/other_user}", "gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jbragg", "id": 2238344, "login": "jbragg", "node_id": "MDQ6VXNlcjIyMzgzNDQ=", "organizations_url": "https://api.github.com/users/jbragg/orgs", "received_events_url": "https://api.github.com/users/jbragg/received_events", "repos_url": "https://api.github.com/users/jbragg/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jbragg/subscriptions", "type": "User", "url": "https://api.github.com/users/jbragg" }
[]
closed
false
null
[]
null
[]
"2021-01-06T08:26:03Z"
"2021-01-07T23:53:47Z"
"2021-01-07T14:01:22Z"
CONTRIBUTOR
null
The parser was missing the last entry in each metadata category.
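A hedged sketch of this bug class (an illustrative block parser, not the dataset script's actual code): when entries are appended only on a separator line, the final buffered entry needs an explicit flush after the loop.

```python
# Illustrative only: without the final flush, the last entry of each
# metadata category would be silently dropped.
def parse_categories(lines):
    categories, buffer = [], []
    for line in lines:
        if line.strip():
            buffer.append(line.strip())
        elif buffer:
            categories.append(buffer)
            buffer = []
    if buffer:  # the flush that fixes the "missing last entry" bug class
        categories.append(buffer)
    return categories

assert parse_categories(["a", "", "b", "c"]) == [["a"], ["b", "c"]]
```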
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1693/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1693/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1693.diff", "html_url": "https://github.com/huggingface/datasets/pull/1693", "merged_at": "2021-01-07T14:01:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/1693.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1693" }
true
https://api.github.com/repos/huggingface/datasets/issues/1691
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1691/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1691/comments
https://api.github.com/repos/huggingface/datasets/issues/1691/events
https://github.com/huggingface/datasets/pull/1691
779,882,271
MDExOlB1bGxSZXF1ZXN0NTQ5ODE3NTM0
1,691
Updated HuggingFace Datasets README (fix typos)
{ "avatar_url": "https://avatars.githubusercontent.com/u/19637339?v=4", "events_url": "https://api.github.com/users/8bitmp3/events{/privacy}", "followers_url": "https://api.github.com/users/8bitmp3/followers", "following_url": "https://api.github.com/users/8bitmp3/following{/other_user}", "gists_url": "https://api.github.com/users/8bitmp3/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/8bitmp3", "id": 19637339, "login": "8bitmp3", "node_id": "MDQ6VXNlcjE5NjM3MzM5", "organizations_url": "https://api.github.com/users/8bitmp3/orgs", "received_events_url": "https://api.github.com/users/8bitmp3/received_events", "repos_url": "https://api.github.com/users/8bitmp3/repos", "site_admin": false, "starred_url": "https://api.github.com/users/8bitmp3/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/8bitmp3/subscriptions", "type": "User", "url": "https://api.github.com/users/8bitmp3" }
[]
closed
false
null
[]
null
[]
"2021-01-06T02:14:38Z"
"2021-01-16T23:30:47Z"
"2021-01-07T10:06:32Z"
CONTRIBUTOR
null
Awesome work on 🤗 Datasets. I found a couple of small typos in the README. Hope this helps. ![](https://emojipedia-us.s3.dualstack.us-west-1.amazonaws.com/thumbs/160/google/56/hugging-face_1f917.png)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1691/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1691/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1691.diff", "html_url": "https://github.com/huggingface/datasets/pull/1691", "merged_at": "2021-01-07T10:06:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/1691.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1691" }
true
https://api.github.com/repos/huggingface/datasets/issues/1690
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1690/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1690/comments
https://api.github.com/repos/huggingface/datasets/issues/1690/events
https://github.com/huggingface/datasets/pull/1690
779,441,631
MDExOlB1bGxSZXF1ZXN0NTQ5NDEwOTgw
1,690
Fast start up
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2021-01-05T19:07:53Z"
"2021-01-06T14:20:59Z"
"2021-01-06T14:20:58Z"
MEMBER
null
Currently, if optional dependencies such as tensorflow, torch, apache_beam, faiss and elasticsearch are installed, then `import datasets` takes a long time since it imports all of these heavy dependencies. To make `datasets` start up fast, I changed that so that they are not imported when `datasets` itself is being imported. On my side this changed the import time of `datasets` from 5sec to 0.5sec, which is enjoyable. To be able to check whether optional dependencies are available without importing them, I'm using `importlib_metadata`, which is part of the standard lib in python>=3.8 and was backported. The difference with `importlib` is that it also makes it possible to get the versions of the libraries without importing them. I added this dependency in `setup.py`.
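A minimal sketch of that check (the exact names here are illustrative, but the pattern follows the description above):

```python
# Detect an optional dependency and its version without importing it.
import importlib_metadata  # stdlib `importlib.metadata` on Python >= 3.8

try:
    TORCH_VERSION = importlib_metadata.version("torch")
    TORCH_AVAILABLE = True
except importlib_metadata.PackageNotFoundError:
    TORCH_VERSION, TORCH_AVAILABLE = None, False
```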
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 3, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/1690/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1690/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1690.diff", "html_url": "https://github.com/huggingface/datasets/pull/1690", "merged_at": "2021-01-06T14:20:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/1690.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1690" }
true
https://api.github.com/repos/huggingface/datasets/issues/1689
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1689/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1689/comments
https://api.github.com/repos/huggingface/datasets/issues/1689/events
https://github.com/huggingface/datasets/pull/1689
779,107,313
MDExOlB1bGxSZXF1ZXN0NTQ5MTEwMDgw
1,689
Fix ade_corpus_v2 config names
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2021-01-05T14:33:28Z"
"2021-01-05T14:55:09Z"
"2021-01-05T14:55:08Z"
MEMBER
null
There are currently some typos in the config names of the `ade_corpus_v2` dataset, I fixed them: - Ade_corpos_v2_classificaion -> Ade_corpus_v2_classification - Ade_corpos_v2_drug_ade_relation -> Ade_corpus_v2_drug_ade_relation - Ade_corpos_v2_drug_dosage_relation -> Ade_corpus_v2_drug_dosage_relation
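With the typos fixed, the three configurations load under their corrected names; a quick usage sketch:

```python
# Load each ade_corpus_v2 configuration under its fixed name.
from datasets import load_dataset

classification = load_dataset("ade_corpus_v2", "Ade_corpus_v2_classification")
drug_ade = load_dataset("ade_corpus_v2", "Ade_corpus_v2_drug_ade_relation")
drug_dosage = load_dataset("ade_corpus_v2", "Ade_corpus_v2_drug_dosage_relation")
```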
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1689/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1689/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1689.diff", "html_url": "https://github.com/huggingface/datasets/pull/1689", "merged_at": "2021-01-05T14:55:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/1689.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1689" }
true
https://api.github.com/repos/huggingface/datasets/issues/1688
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1688/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1688/comments
https://api.github.com/repos/huggingface/datasets/issues/1688/events
https://github.com/huggingface/datasets/pull/1688
779,029,685
MDExOlB1bGxSZXF1ZXN0NTQ5MDM5ODg0
1,688
Fix DaNE last example
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2021-01-05T13:29:37Z"
"2021-01-05T14:00:15Z"
"2021-01-05T14:00:13Z"
MEMBER
null
The last example from the DaNE dataset is empty. Fix #1686
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1688/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1688/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1688.diff", "html_url": "https://github.com/huggingface/datasets/pull/1688", "merged_at": "2021-01-05T14:00:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/1688.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1688" }
true
https://api.github.com/repos/huggingface/datasets/issues/1687
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1687/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1687/comments
https://api.github.com/repos/huggingface/datasets/issues/1687/events
https://github.com/huggingface/datasets/issues/1687
779,004,894
MDU6SXNzdWU3NzkwMDQ4OTQ=
1,687
Question: Shouldn't .info be a part of DatasetDict?
{ "avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4", "events_url": "https://api.github.com/users/KennethEnevoldsen/events{/privacy}", "followers_url": "https://api.github.com/users/KennethEnevoldsen/followers", "following_url": "https://api.github.com/users/KennethEnevoldsen/following{/other_user}", "gists_url": "https://api.github.com/users/KennethEnevoldsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/KennethEnevoldsen", "id": 23721977, "login": "KennethEnevoldsen", "node_id": "MDQ6VXNlcjIzNzIxOTc3", "organizations_url": "https://api.github.com/users/KennethEnevoldsen/orgs", "received_events_url": "https://api.github.com/users/KennethEnevoldsen/received_events", "repos_url": "https://api.github.com/users/KennethEnevoldsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/KennethEnevoldsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KennethEnevoldsen/subscriptions", "type": "User", "url": "https://api.github.com/users/KennethEnevoldsen" }
[]
open
false
null
[]
null
[ "We could do something. There is a part of `.info` which is split specific (cache files, split instructions) but maybe if could be made to work.", "Yes this was kinda the idea I was going for. DatasetDict.info would be the shared info amongs the datasets (maybe even some info on how they differ). " ]
"2021-01-05T13:08:41Z"
"2021-01-07T10:18:06Z"
null
CONTRIBUTOR
null
Currently, only `Dataset` contains the .info or .features, but many datasets contain standard splits (train, test), so the underlying information is the same (or at least should be) across the splits. For instance: ``` >>> ds = datasets.load_dataset("conll2002", "es") >>> ds.info Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'DatasetDict' object has no attribute 'info' ``` I could imagine that this wouldn't work for dataset dicts which hold entirely different datasets (multimodal datasets), but it seems odd that splits of the same dataset are treated the same way as what are essentially different datasets. Intuitively, it would also make sense that a dataset supplied via `load_dataset` has a common .info which covers the entire dataset. It is entirely possible that I am missing another perspective.
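As a stop-gap, the shared metadata can be read from any single split. A sketch, assuming that the split-level `DatasetInfo` objects of one dataset carry the same description and features:

```python
from datasets import load_dataset

ds = load_dataset("conll2002", "es")
# Workaround until DatasetDict exposes .info directly: read it from a split.
info = ds["train"].info
print(info.description)
print(ds["train"].features)
```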
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1687/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1687/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1686
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1686/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1686/comments
https://api.github.com/repos/huggingface/datasets/issues/1686/events
https://github.com/huggingface/datasets/issues/1686
778,921,684
MDU6SXNzdWU3Nzg5MjE2ODQ=
1,686
Dataset Error: DaNE contains empty samples at the end
{ "avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4", "events_url": "https://api.github.com/users/KennethEnevoldsen/events{/privacy}", "followers_url": "https://api.github.com/users/KennethEnevoldsen/followers", "following_url": "https://api.github.com/users/KennethEnevoldsen/following{/other_user}", "gists_url": "https://api.github.com/users/KennethEnevoldsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/KennethEnevoldsen", "id": 23721977, "login": "KennethEnevoldsen", "node_id": "MDQ6VXNlcjIzNzIxOTc3", "organizations_url": "https://api.github.com/users/KennethEnevoldsen/orgs", "received_events_url": "https://api.github.com/users/KennethEnevoldsen/received_events", "repos_url": "https://api.github.com/users/KennethEnevoldsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/KennethEnevoldsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KennethEnevoldsen/subscriptions", "type": "User", "url": "https://api.github.com/users/KennethEnevoldsen" }
[]
closed
false
null
[]
null
[ "Thanks for reporting, I opened a PR to fix that", "One the PR is merged the fix will be available in the next release of `datasets`.\r\n\r\nIf you don't want to wait the next release you can still load the script from the master branch with\r\n\r\n```python\r\nload_dataset(\"dane\", script_version=\"master\")\r\n```", "If you have other questions feel free to reopen :) " ]
"2021-01-05T11:54:26Z"
"2021-01-05T14:01:09Z"
"2021-01-05T14:00:13Z"
CONTRIBUTOR
null
The DaNE dataset contains empty samples at the end. They are easy to remove using a filter, but they should probably not be there to begin with, as they can cause errors. ```python >>> import datasets [...] >>> dataset = datasets.load_dataset("dane") [...] >>> dataset["test"][-1] {'dep_ids': [], 'dep_labels': [], 'lemmas': [], 'morph_tags': [], 'ner_tags': [], 'pos_tags': [], 'sent_id': '', 'text': '', 'tok_ids': [], 'tokens': []} >>> dataset["train"][-1] {'dep_ids': [], 'dep_labels': [], 'lemmas': [], 'morph_tags': [], 'ner_tags': [], 'pos_tags': [], 'sent_id': '', 'text': '', 'tok_ids': [], 'tokens': []} ``` Best, Kenneth
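For reference, a sketch of such a filter, dropping any example without tokens (this is a user-side workaround, not the upstream fix):

```python
from datasets import load_dataset

dataset = load_dataset("dane")
# Keep only non-empty examples; the trailing samples have empty token lists.
dataset = dataset.filter(lambda example: len(example["tokens"]) > 0)
```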
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1686/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1686/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1685
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1685/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1685/comments
https://api.github.com/repos/huggingface/datasets/issues/1685/events
https://github.com/huggingface/datasets/pull/1685
778,914,431
MDExOlB1bGxSZXF1ZXN0NTQ4OTM1MzY2
1,685
Update README.md of covid-tweets-japanese
{ "avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4", "events_url": "https://api.github.com/users/forest1988/events{/privacy}", "followers_url": "https://api.github.com/users/forest1988/followers", "following_url": "https://api.github.com/users/forest1988/following{/other_user}", "gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/forest1988", "id": 2755894, "login": "forest1988", "node_id": "MDQ6VXNlcjI3NTU4OTQ=", "organizations_url": "https://api.github.com/users/forest1988/orgs", "received_events_url": "https://api.github.com/users/forest1988/received_events", "repos_url": "https://api.github.com/users/forest1988/repos", "site_admin": false, "starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/forest1988/subscriptions", "type": "User", "url": "https://api.github.com/users/forest1988" }
[]
closed
false
null
[]
null
[ "Thanks for reviewing and merging!" ]
"2021-01-05T11:47:27Z"
"2021-01-06T10:27:12Z"
"2021-01-06T09:31:10Z"
CONTRIBUTOR
null
Update README.md of covid-tweets-japanese added by PR https://github.com/huggingface/datasets/pull/1367 and https://github.com/huggingface/datasets/pull/1402. - Update "Data Splits" to state more precisely that no information is provided for now. - old: [More Information Needed] - new: No information about data splits is provided for now. - The automatic generation of links did not seem to work properly, so I added a space before and after the URL to make the links work correctly.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1685/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1685/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1685.diff", "html_url": "https://github.com/huggingface/datasets/pull/1685", "merged_at": "2021-01-06T09:31:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/1685.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1685" }
true
https://api.github.com/repos/huggingface/datasets/issues/1684
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1684/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1684/comments
https://api.github.com/repos/huggingface/datasets/issues/1684/events
https://github.com/huggingface/datasets/pull/1684
778,356,196
MDExOlB1bGxSZXF1ZXN0NTQ4NDU3NDY1
1,684
Add CANER Corpus
{ "avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4", "events_url": "https://api.github.com/users/KMFODA/events{/privacy}", "followers_url": "https://api.github.com/users/KMFODA/followers", "following_url": "https://api.github.com/users/KMFODA/following{/other_user}", "gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/KMFODA", "id": 35491698, "login": "KMFODA", "node_id": "MDQ6VXNlcjM1NDkxNjk4", "organizations_url": "https://api.github.com/users/KMFODA/orgs", "received_events_url": "https://api.github.com/users/KMFODA/received_events", "repos_url": "https://api.github.com/users/KMFODA/repos", "site_admin": false, "starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions", "type": "User", "url": "https://api.github.com/users/KMFODA" }
[]
closed
false
null
[]
null
[]
"2021-01-04T20:49:11Z"
"2021-01-25T09:09:20Z"
"2021-01-25T09:09:20Z"
CONTRIBUTOR
null
What does this PR do? Adds the following dataset: https://github.com/RamziSalah/Classical-Arabic-Named-Entity-Recognition-Corpus Who can review? @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1684/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1684/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1684.diff", "html_url": "https://github.com/huggingface/datasets/pull/1684", "merged_at": "2021-01-25T09:09:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/1684.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1684" }
true
https://api.github.com/repos/huggingface/datasets/issues/1683
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1683/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1683/comments
https://api.github.com/repos/huggingface/datasets/issues/1683/events
https://github.com/huggingface/datasets/issues/1683
778,287,612
MDU6SXNzdWU3NzgyODc2MTI=
1,683
`ArrowInvalid` occurs while running `Dataset.map()` function for DPRContext
{ "avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4", "events_url": "https://api.github.com/users/abarbosa94/events{/privacy}", "followers_url": "https://api.github.com/users/abarbosa94/followers", "following_url": "https://api.github.com/users/abarbosa94/following{/other_user}", "gists_url": "https://api.github.com/users/abarbosa94/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abarbosa94", "id": 6608232, "login": "abarbosa94", "node_id": "MDQ6VXNlcjY2MDgyMzI=", "organizations_url": "https://api.github.com/users/abarbosa94/orgs", "received_events_url": "https://api.github.com/users/abarbosa94/received_events", "repos_url": "https://api.github.com/users/abarbosa94/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abarbosa94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abarbosa94/subscriptions", "type": "User", "url": "https://api.github.com/users/abarbosa94" }
[]
closed
false
null
[]
null
[ "Looks like the mapping function returns a dictionary with a 768-dim array in the `embeddings` field. Since the map is batched, we actually expect the `embeddings` field to be an array of shape (batch_size, 768) to have one embedding per example in the batch.\r\n\r\nTo fix that can you try to remove one of the `[0]` ? In my opinion you only need one of them, not two.", "It makes sense :D\r\n\r\nIt seems to work! Thanks a lot :))\r\n\r\nClosing the issue" ]
"2021-01-04T18:47:53Z"
"2021-01-04T19:04:45Z"
"2021-01-04T19:04:45Z"
CONTRIBUTOR
null
It seems to fail on the final batch ): steps to reproduce: ``` from datasets import load_dataset from elasticsearch import Elasticsearch import torch from transformers import file_utils, set_seed from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast MAX_SEQ_LENGTH = 256 ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base", cache_dir="../datasets/") ctx_tokenizer = DPRContextEncoderTokenizerFast.from_pretrained( "facebook/dpr-ctx_encoder-single-nq-base", cache_dir="..datasets/" ) dataset = load_dataset('text', data_files='data/raw/ARC_Corpus.txt', cache_dir='../datasets') torch.set_grad_enabled(False) ds_with_embeddings = dataset.map( lambda example: { 'embeddings': ctx_encoder( **ctx_tokenizer( example["text"], padding='max_length', truncation=True, max_length=MAX_SEQ_LENGTH, return_tensors="pt" ) )[0][0].numpy(), }, batched=True, load_from_cache_file=False, batch_size=1000 ) ``` ARC Corpus can be obtained from [here](https://ai2-datasets.s3-us-west-2.amazonaws.com/arc/ARC-V1-Feb2018.zip) And then the error: ``` --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) <ipython-input-13-67d139bb2ed3> in <module> 14 batched=True, 15 load_from_cache_file=False, ---> 16 batch_size=1000 17 ) ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc) 301 num_proc=num_proc, 302 ) --> 303 for k, dataset in self.items() 304 } 305 ) ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/dataset_dict.py in <dictcomp>(.0) 301 num_proc=num_proc, 302 ) --> 303 for k, dataset in self.items() 304 } 305 ) ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint) 1257 fn_kwargs=fn_kwargs, 1258 new_fingerprint=new_fingerprint, -> 1259 update_data=update_data, 1260 ) 1261 else: ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 155 } 156 # apply actual function --> 157 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 158 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 159 # re-apply format to the output ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 161 # Call actual function 162 --> 163 out = func(self, *args, **kwargs) 164 165 # Update fingerprint of in-place transforms + update in-place history of transforms ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, update_data) 1526 if update_data: 1527 batch = cast_to_python_objects(batch) -> 1528 writer.write_batch(batch) 1529 if update_data: 1530 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size) 276 typed_sequence = TypedSequence(batch_examples[col], type=col_type, try_type=col_try_type) 277 typed_sequence_examples[col] = typed_sequence --> 278 pa_table = pa.Table.from_pydict(typed_sequence_examples) 279 self.write_table(pa_table) 280 ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict() ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_arrays() ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.validate() ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: Column 1 named text expected length 768 but got length 1000 ```
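Following the suggestion in the comments, a sketch of the corrected call, keeping a single `[0]` so the batched map returns one embedding per example, i.e. an array of shape (batch_size, 768):

```python
# Same setup as in the snippet above (dataset, ctx_encoder, ctx_tokenizer,
# MAX_SEQ_LENGTH); only the indexing of the encoder output changes.
ds_with_embeddings = dataset.map(
    lambda example: {
        # ctx_encoder(...)[0] is the pooled output of shape (batch_size, 768),
        # which matches what a batched map expects: one row per example.
        'embeddings': ctx_encoder(
            **ctx_tokenizer(
                example["text"],
                padding='max_length',
                truncation=True,
                max_length=MAX_SEQ_LENGTH,
                return_tensors="pt"
            )
        )[0].numpy(),
    },
    batched=True,
    load_from_cache_file=False,
    batch_size=1000
)
```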
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1683/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1683/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1682
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1682/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1682/comments
https://api.github.com/repos/huggingface/datasets/issues/1682/events
https://github.com/huggingface/datasets/pull/1682
778,268,156
MDExOlB1bGxSZXF1ZXN0NTQ4Mzg1NTk1
1,682
Don't use xlrd for xlsx files
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2021-01-04T18:11:50Z"
"2021-01-04T18:13:14Z"
"2021-01-04T18:13:13Z"
MEMBER
null
Since the latest release of `xlrd` (2.0), support for xlsx files has been dropped. Therefore we needed to use something else. A good alternative is `openpyxl`, which also has an integration with pandas, so we can still call `pd.read_excel`. I left the unused import of `openpyxl` in the dataset scripts to show users that this is a required dependency to use the scripts. I tested the different datasets using `datasets-cli test` and the tests are successful (no missing examples).
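For illustration, a minimal sketch of reading an xlsx file through the openpyxl engine (the file path is a placeholder):

```python
import pandas as pd

# Recent pandas versions pick openpyxl for .xlsx files automatically when it
# is installed; the engine can also be requested explicitly.
df = pd.read_excel("data.xlsx", engine="openpyxl")
```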
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1682/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1682/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1682.diff", "html_url": "https://github.com/huggingface/datasets/pull/1682", "merged_at": "2021-01-04T18:13:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/1682.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1682" }
true
https://api.github.com/repos/huggingface/datasets/issues/1681
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1681/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1681/comments
https://api.github.com/repos/huggingface/datasets/issues/1681/events
https://github.com/huggingface/datasets/issues/1681
777,644,163
MDU6SXNzdWU3Nzc2NDQxNjM=
1,681
Dataset "dane" missing
{ "avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4", "events_url": "https://api.github.com/users/KennethEnevoldsen/events{/privacy}", "followers_url": "https://api.github.com/users/KennethEnevoldsen/followers", "following_url": "https://api.github.com/users/KennethEnevoldsen/following{/other_user}", "gists_url": "https://api.github.com/users/KennethEnevoldsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/KennethEnevoldsen", "id": 23721977, "login": "KennethEnevoldsen", "node_id": "MDQ6VXNlcjIzNzIxOTc3", "organizations_url": "https://api.github.com/users/KennethEnevoldsen/orgs", "received_events_url": "https://api.github.com/users/KennethEnevoldsen/received_events", "repos_url": "https://api.github.com/users/KennethEnevoldsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/KennethEnevoldsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KennethEnevoldsen/subscriptions", "type": "User", "url": "https://api.github.com/users/KennethEnevoldsen" }
[]
closed
false
null
[]
null
[ "Hi @KennethEnevoldsen ,\r\nI think the issue might be that this dataset was added during the community sprint and has not been released yet. It will be available with the v2 of datasets.\r\nFor now, you should be able to load the datasets after installing the latest (master) version of datasets using pip:\r\npip install git+https://github.com/huggingface/datasets.git@master", "The `dane` dataset was added recently, that's why it wasn't available yet. We did an intermediate release today just before the v2.0.\r\n\r\nTo load it you can just update `datasets`\r\n```\r\npip install --upgrade datasets\r\n```\r\n\r\nand then you can load `dane` with\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"dane\")\r\n```", "Thanks. Solved the problem." ]
"2021-01-03T14:03:03Z"
"2021-01-05T08:35:35Z"
"2021-01-05T08:35:13Z"
CONTRIBUTOR
null
the `dane` dataset appears to be missing in the latest version (1.1.3). ```python >>> import datasets >>> datasets.__version__ '1.1.3' >>> "dane" in datasets.list_datasets() True ``` As we can see it should be present, but it doesn't seem to be findable when using `load_dataset`. ```python >>> datasets.load_dataset("dane") Traceback (most recent call last): File "/home/kenneth/.Envs/EDP/lib/python3.8/site-packages/datasets/load.py", line 267, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/home/kenneth/.Envs/EDP/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 300, in cached_path output_path = get_from_cache( File "/home/kenneth/.Envs/EDP/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/dane/dane.py During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/kenneth/.Envs/EDP/lib/python3.8/site-packages/datasets/load.py", line 278, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/home/kenneth/.Envs/EDP/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 300, in cached_path output_path = get_from_cache( File "/home/kenneth/.Envs/EDP/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/dane/dane.py During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/kenneth/.Envs/EDP/lib/python3.8/site-packages/datasets/load.py", line 588, in load_dataset module_path, hash = prepare_module( File "/home/kenneth/.Envs/EDP/lib/python3.8/site-packages/datasets/load.py", line 280, in prepare_module raise FileNotFoundError( FileNotFoundError: Couldn't find file locally at dane/dane.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/dane/dane.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/dane/dane.py ``` This issue might be relevant to @ophelielacroix from the Alexandra Institut who created the data.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1681/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1681/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1680
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1680/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1680/comments
https://api.github.com/repos/huggingface/datasets/issues/1680/events
https://github.com/huggingface/datasets/pull/1680
777,623,053
MDExOlB1bGxSZXF1ZXN0NTQ3ODY4MjEw
1,680
added TurkishProductReviews dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/41359672?v=4", "events_url": "https://api.github.com/users/basakbuluz/events{/privacy}", "followers_url": "https://api.github.com/users/basakbuluz/followers", "following_url": "https://api.github.com/users/basakbuluz/following{/other_user}", "gists_url": "https://api.github.com/users/basakbuluz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/basakbuluz", "id": 41359672, "login": "basakbuluz", "node_id": "MDQ6VXNlcjQxMzU5Njcy", "organizations_url": "https://api.github.com/users/basakbuluz/orgs", "received_events_url": "https://api.github.com/users/basakbuluz/received_events", "repos_url": "https://api.github.com/users/basakbuluz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/basakbuluz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/basakbuluz/subscriptions", "type": "User", "url": "https://api.github.com/users/basakbuluz" }
[]
closed
false
null
[]
null
[ "@lhoestq, can you please review this PR?", "Thanks for the suggestions. Updates were made and dataset_infos.json file was created again." ]
"2021-01-03T11:52:59Z"
"2021-01-04T18:15:35Z"
"2021-01-04T18:15:35Z"
CONTRIBUTOR
null
This PR adds the **Turkish Product Reviews dataset, which contains 235,165 product reviews collected online. There are 220,284 positive and 14,881 negative reviews**. - **Repository:** [turkish-text-data](https://github.com/fthbrmnby/turkish-text-data) - **Point of Contact:** Fatih Barmanbay - @fthbrmnby
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1680/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1680/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1680.diff", "html_url": "https://github.com/huggingface/datasets/pull/1680", "merged_at": "2021-01-04T18:15:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/1680.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1680" }
true
https://api.github.com/repos/huggingface/datasets/issues/1679
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1679/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1679/comments
https://api.github.com/repos/huggingface/datasets/issues/1679/events
https://github.com/huggingface/datasets/issues/1679
777,587,792
MDU6SXNzdWU3Nzc1ODc3OTI=
1,679
Can't import cc100 dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/14968123?v=4", "events_url": "https://api.github.com/users/alighofrani95/events{/privacy}", "followers_url": "https://api.github.com/users/alighofrani95/followers", "following_url": "https://api.github.com/users/alighofrani95/following{/other_user}", "gists_url": "https://api.github.com/users/alighofrani95/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alighofrani95", "id": 14968123, "login": "alighofrani95", "node_id": "MDQ6VXNlcjE0OTY4MTIz", "organizations_url": "https://api.github.com/users/alighofrani95/orgs", "received_events_url": "https://api.github.com/users/alighofrani95/received_events", "repos_url": "https://api.github.com/users/alighofrani95/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alighofrani95/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alighofrani95/subscriptions", "type": "User", "url": "https://api.github.com/users/alighofrani95" }
[]
closed
false
null
[]
null
[ "cc100 was added recently, that's why it wasn't available yet.\r\n\r\nTo load it you can just update `datasets`\r\n```\r\npip install --upgrade datasets\r\n```\r\n\r\nand then you can load `cc100` with\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nlang = \"en\"\r\ndataset = load_dataset(\"cc100\", lang=lang, split=\"train\")\r\n```" ]
"2021-01-03T07:12:56Z"
"2022-10-05T12:42:25Z"
"2022-10-05T12:42:25Z"
NONE
null
There is an issue when importing the cc100 dataset. ``` from datasets import load_dataset dataset = load_dataset("cc100") ``` FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/cc100/cc100.py During handling of the above exception, another exception occurred: FileNotFoundError Traceback (most recent call last) FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/cc100/cc100.py During handling of the above exception, another exception occurred: FileNotFoundError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 280 raise FileNotFoundError( 281 "Couldn't find file locally at {}, or remotely at {} or {}".format( --> 282 combined_path, github_file_path, file_path 283 ) 284 ) FileNotFoundError: Couldn't find file locally at cc100/cc100.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/cc100/cc100.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/cc100/cc100.py
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1679/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1679/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1678
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1678/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1678/comments
https://api.github.com/repos/huggingface/datasets/issues/1678/events
https://github.com/huggingface/datasets/pull/1678
777,567,920
MDExOlB1bGxSZXF1ZXN0NTQ3ODI4MTMy
1,678
Switchboard Dialog Act Corpus added under `datasets/swda`
{ "avatar_url": "https://avatars.githubusercontent.com/u/22454783?v=4", "events_url": "https://api.github.com/users/gmihaila/events{/privacy}", "followers_url": "https://api.github.com/users/gmihaila/followers", "following_url": "https://api.github.com/users/gmihaila/following{/other_user}", "gists_url": "https://api.github.com/users/gmihaila/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gmihaila", "id": 22454783, "login": "gmihaila", "node_id": "MDQ6VXNlcjIyNDU0Nzgz", "organizations_url": "https://api.github.com/users/gmihaila/orgs", "received_events_url": "https://api.github.com/users/gmihaila/received_events", "repos_url": "https://api.github.com/users/gmihaila/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gmihaila/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gmihaila/subscriptions", "type": "User", "url": "https://api.github.com/users/gmihaila" }
[]
closed
false
null
[]
null
[ "@lhoestq Thank you for your detailed comments! I fixed everything you suggested.\r\n\r\nPlease let me know if I'm missing anything else.", "It looks like the Transcript and Utterance objects are missing, maybe we can mention it in the README ? Or just add them ? @gmihaila @bhavitvyamalik ", "Hi @lhoestq,\r\nI'm working on this to add the full dataset", "> It looks like the Transcript and Utterance objects are missing, maybe we can mention it in the README ? Or just add them ? @gmihaila @bhavitvyamalik\r\n\r\n@lhoestq Any info on how to add them?", "@gmihaila, instead of using the current repo you should look into [this](https://github.com/cgpotts/swda). You can use the `csv` files uploaded in this repo (`swda.zip`) to access other fields and include them in this dataset. It has one dependency too, `swda.py`, you can download that separately and include it in your dataset's folder to be imported while reading the `csv` files.\r\n\r\nAlmost all the attributes of `Transcript` and `Utterance` objects are of the type str, int, or list. As far as `trees` attribute is concerned in utterance object you can simply parse it as string and user can maybe later convert it to nltk.tree object", "@bhavitvyamalik Thank you for the clarification! \r\n\r\nI didn't use [that](https://github.com/cgpotts/swda) because it doesn't have the splits. I think in combination with [what I used](https://github.com/NathanDuran/Switchboard-Corpus) would help.\r\n\r\nLet me know if I can help! I can make those changes if you don't have the time.", "I'm a bit busy for the next 2 weeks. I'll be able to complete it by end of January only. Maybe you can start with it and I'll help you?\r\nAlso, I looked into the official train/val/test splits and not all the files are there in the repo I used so I think either we'll have to skip them or put all of that into just train", "Yes, I can start working on it and ask you to do a code review.\r\n\r\nYes, not all files are there. I'll try to find papers that have the correct and full splits, if not, I'll do like you suggested.\r\n\r\nThank you again for your help @bhavitvyamalik !" ]
"2021-01-03T03:53:41Z"
"2021-01-08T18:09:21Z"
"2021-01-05T10:06:35Z"
CONTRIBUTOR
null
Switchboard Dialog Act Corpus Intro: The Switchboard Dialog Act Corpus (SwDA) extends the Switchboard-1 Telephone Speech Corpus, Release 2, with turn/utterance-level dialog-act tags. The tags summarize syntactic, semantic, and pragmatic information about the associated turn. The SwDA project was undertaken at UC Boulder in the late 1990s. Details: [homepage](http://compprag.christopherpotts.net/swda.html) [repo](https://github.com/NathanDuran/Switchboard-Corpus/raw/master/swda_data/) I believe this is an important dataset to have, since no dataset related to dialogue acts has been added yet. I didn't find any formatting guidelines for pull requests. I hope all this information is enough. For any support, please contact me.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1678/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1678/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1678.diff", "html_url": "https://github.com/huggingface/datasets/pull/1678", "merged_at": "2021-01-05T10:06:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/1678.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1678" }
true