id | title | body | description | state | created_at | updated_at | closed_at | user
---|---|---|---|---|---|---|---|---|
1,821,824,876 | Document setting up local dev env with same Poetry version as CI | Add to `DEVELOPER_GUIDE` docs how to set up local development environment with Poetry version 1.4.2, the same as in CI.
Fix #1565. | Document setting up local dev env with same Poetry version as CI: Add to `DEVELOPER_GUIDE` docs how to set up local development environment with Poetry version 1.4.2, the same as in CI.
Fix #1565. | closed | 2023-07-26T08:11:06Z | 2023-07-27T09:52:22Z | 2023-07-27T09:52:21Z | albertvillanova |
1,821,703,588 | Install same poetry version in local development environment as in CI | Currently, the `DEVELOPER_GUIDE` instructs to install the latest Poetry version (currently 1.5.1) to set up the local development environment, unlike the CI, which uses 1.4.2. | Install same poetry version in local development environment as in CI: Currently, the `DEVELOPER_GUIDE` instructs to install the latest Poetry version (currently 1.5.1) to set up the local development environment, unlike the CI, which uses 1.4.2. | closed | 2023-07-26T06:49:39Z | 2023-07-27T09:52:22Z | 2023-07-27T09:52:22Z | albertvillanova |
1,821,255,854 | feat: 🎸 add heavy workers to help flush the queue | null | feat: 🎸 add heavy workers to help flush the queue: | closed | 2023-07-25T22:25:31Z | 2023-07-25T22:26:25Z | 2023-07-25T22:26:25Z | severo |
1,821,104,475 | Add information about the storage locations on app startup | For all the apps (services, jobs, workers), emit logs at startup that describe the storage locations (and statistics about the space and inodes? + is it accessible by the runtime user?) | Add information about the storage locations on app startup: For all the apps (services, jobs, workers), emit logs at startup that describe the storage locations (and statistics about the space and inodes? + is it accessible by the runtime user?) | closed | 2023-07-25T20:34:19Z | 2024-06-19T14:18:42Z | 2024-06-19T14:18:41Z | severo |
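A minimal sketch of what such startup logging could look like, assuming a plain list of directory paths; the function name and example paths are made up, and `shutil.disk_usage`, `os.statvfs`, and `os.access` come from the standard library:

```python
import logging
import os
import shutil

def log_storage_info(paths: list[str]) -> None:
    """Log free space, free inodes and accessibility for each storage location."""
    for path in paths:
        writable = os.access(path, os.R_OK | os.W_OK)  # can the runtime user read/write it?
        try:
            usage = shutil.disk_usage(path)   # total/used/free bytes
            stats = os.statvfs(path)          # inode counters (POSIX only)
            logging.info(
                "storage %s: free=%d/%d bytes, free_inodes=%d/%d, writable=%s",
                path, usage.free, usage.total, stats.f_favail, stats.f_files, writable,
            )
        except OSError as err:
            logging.warning("storage %s: cannot stat (%s)", path, err)

# Hypothetical paths, for illustration only:
log_storage_info(["/storage/assets", "/storage/parquet-metadata"])
```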
1,821,101,939 | Mount all the storages in the "storage" pod | See https://www.notion.so/huggingface2/Disk-storage-0a4a8fcf27754c8cb7248b259dcc4b21 (internal) | Mount all the storages in the "storage" pod: See https://www.notion.so/huggingface2/Disk-storage-0a4a8fcf27754c8cb7248b259dcc4b21 (internal) | closed | 2023-07-25T20:32:29Z | 2023-08-25T15:06:35Z | 2023-08-25T15:06:35Z | severo |
1,821,099,533 | Check the disk usage of all the storages in metrics | See https://www.notion.so/huggingface2/Disk-storage-0a4a8fcf27754c8cb7248b259dcc4b21 (internal) | Check the disk usage of all the storages in metrics: See https://www.notion.so/huggingface2/Disk-storage-0a4a8fcf27754c8cb7248b259dcc4b21 (internal) | closed | 2023-07-25T20:30:48Z | 2023-08-11T20:51:45Z | 2023-08-11T20:51:45Z | severo |
1,821,070,069 | The api and rows services cannot store datasets cache | The datasets cache, for api and rows services (they depend on datasets), is not set, and by default is `/.cache/huggingface/datasets`. But this directory is not accessible by the python user.
I'm not sure if it's an issue, but I think we should:
- set the datasets environment variable for these services (note that all the pods that depend on libcommon potentially have the same issue, but not all of them use datasets)
- or better (but it's more work) create a `libs/libdatasets` library that should only be used by /rows and the workers
| The api and rows services cannot store datasets cache: The datasets cache, for api and rows services (they depend on datasets), is not set, and by default is `/.cache/huggingface/datasets`. But this directory is not accessible by the python user.
I'm not sure if it's an issue, but I think we should:
- set the datasets environment variable for these services (note that all the pods that depend on libcommon potentially have the same issue, but not all of them use datasets)
- or better (but it's more work) create a `libs/libdatasets` library that should only be used by /rows and the workers
| open | 2023-07-25T20:13:22Z | 2023-09-04T11:42:23Z | null | severo |
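A rough sketch of the first option, assuming the services set the cache through an environment variable before importing `datasets`; `DATASETS_BASED_HF_DATASETS_CACHE` and the fallback path are hypothetical, while `HF_DATASETS_CACHE` is the variable the `datasets` library reads:

```python
import os

# Point the datasets cache at a directory the runtime user can actually write to.
# "DATASETS_BASED_HF_DATASETS_CACHE" is a hypothetical service-level setting;
# HF_DATASETS_CACHE is the environment variable honored by the `datasets` library.
cache_dir = os.environ.get("DATASETS_BASED_HF_DATASETS_CACHE", "/storage/datasets-cache")
os.makedirs(cache_dir, exist_ok=True)
os.environ["HF_DATASETS_CACHE"] = cache_dir

import datasets  # imported after the variable is set so that it takes effect
```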
1,821,038,082 | Fix storage configs | null | Fix storage configs: | closed | 2023-07-25T19:55:26Z | 2023-07-25T20:22:19Z | 2023-07-25T20:22:18Z | severo |
1,820,707,381 | feat: 🎸 request 4Gi per /rows pod | it will allocate fewer pods per node. We regularly have OOM errors, with pods that use 3, 5, 6, 7 GiB RAM, and the node does not have enough RAM left for them.
<img width="305" alt="Capture d'écran 2023-07-25 à 12 29 26" src="https://github.com/huggingface/datasets-server/assets/1676121/6f237006-2bab-4604-82f7-2f0fa1a3ac0a">
| feat: 🎸 request 4Gi per /rows pod: it will allocate fewer pods per node. We regularly have OOM errors, with pods that use 3, 5, 6, 7 GiB RAM, and the node does not have enough RAM left for them.
<img width="305" alt="Capture d'écran 2023-07-25 à 12 29 26" src="https://github.com/huggingface/datasets-server/assets/1676121/6f237006-2bab-4604-82f7-2f0fa1a3ac0a">
| closed | 2023-07-25T16:31:45Z | 2023-07-25T19:32:53Z | 2023-07-25T16:40:07Z | severo |
1,820,638,038 | Update certifi to 2023.7.22 | Update `certifi` to 2023.7.22 in poetry lock files
Fix #1556 | Update certifi to 2023.7.22: Update `certifi` to 2023.7.22 in poetry lock files
Fix #1556 | closed | 2023-07-25T15:50:52Z | 2023-07-25T16:05:54Z | 2023-07-25T16:05:53Z | albertvillanova |
1,820,614,095 | Update certifi to 2023.7.22 | Our CI pip audit finds 1 vulnerability in: https://github.com/huggingface/datasets-server/actions/runs/5658665095/job/15330470711?pr=1555
```
Found 1 known vulnerability in 1 package
Name Version ID Fix Versions
------- -------- ------------------- ------------
certifi 2023.5.7 GHSA-xqr8-7jwr-rhp7 2023.7.22
``` | Update certifi to 2023.7.22 : Our CI pip audit finds 1 vulnerability in: https://github.com/huggingface/datasets-server/actions/runs/5658665095/job/15330470711?pr=1555
```
Found 1 known vulnerability in 1 package
Name Version ID Fix Versions
------- -------- ------------------- ------------
certifi 2023.5.7 GHSA-xqr8-7jwr-rhp7 2023.7.22
``` | closed | 2023-07-25T15:38:10Z | 2023-07-25T16:05:54Z | 2023-07-25T16:05:54Z | albertvillanova |
1,820,582,354 | Update poetry minor version in Dockerfiles and GH Actions | Update poetry minor version in Dockerfiles and GH Actions:
- From: 1.4.0
- To: 1.4.2
This way we integrate the bug fixes to 1.4.0. | Update poetry minor version in Dockerfiles and GH Actions: Update poetry minor version in Dockerfiles and GH Actions:
- From: 1.4.0
- To: 1.4.2
This way we integrate the bug fixes to 1.4.0. | closed | 2023-07-25T15:20:23Z | 2023-07-25T20:02:29Z | 2023-07-25T17:30:34Z | albertvillanova |
1,820,092,749 | Align poetry version in all Docker files | Currently, the Poetry version set in jobs/cache_maintenance Docker file is different from the one set in all the other Docker files.
This PR aligns the Poetry version in jobs/cache_maintenance Docker file with all the rest.
Related PRs:
- #1017
- #923 | Align poetry version in all Docker files: Currently, the Poetry version set in jobs/cache_maintenance Docker file is different from the one set in all the other Docker files.
This PR aligns the Poetry version in jobs/cache_maintenance Docker file with all the rest.
Related PRs:
- #1017
- #923 | closed | 2023-07-25T11:08:13Z | 2023-07-25T13:17:15Z | 2023-07-25T13:17:14Z | albertvillanova |
1,820,057,841 | Update locked cachecontrol yanked version in e2e | Update locked `cachecontrol` version from yanked 0.13.0 to 0.13.1 in `e2e` subpackage.
Related to:
- #1344
| Update locked cachecontrol yanked version in e2e: Update locked `cachecontrol` version from yanked 0.13.0 to 0.13.1 in `e2e` subpackage.
Related to:
- #1344
| closed | 2023-07-25T10:44:48Z | 2023-07-25T16:14:23Z | 2023-07-25T16:14:22Z | albertvillanova |
1,820,022,880 | Update huggingface-hub dependency to 0.16.4 version | After 0.16 `huggingface-hub` release, update dependencies on it.
Note that we remove the dependency on an explicit commit from `services/worker`.
Close #1487. | Update huggingface-hub dependency to 0.16.4 version: After 0.16 `huggingface-hub` release, update dependencies on it.
Note that we remove the dependency on an explicit commit from `services/worker`.
Close #1487. | closed | 2023-07-25T10:25:24Z | 2023-07-25T16:13:40Z | 2023-07-25T16:13:38Z | albertvillanova |
1,819,072,365 | feat: 🎸 reduce resources | the queue is empty | feat: 🎸 reduce resources: the queue is empty | closed | 2023-07-24T20:14:16Z | 2023-07-24T20:15:14Z | 2023-07-24T20:15:13Z | severo |
1,818,736,032 | upgrade datasets to 2.14 | https://github.com/huggingface/datasets/releases/tag/2.14.0
main changes:
- use `token` instead of `use_auth_token`
- the default config name is now `default` instead of `username--dataset_name`: we have to refresh all the datasets with only one config
TODO:
- [x] #1589
- [x] #1578
- [x] Refresh all the datasets with only one config
- [x] Refresh all the datasets with `StreamingRowsError`
TODO: 2.14.4
- [x] #1652
- [x] #1659 | upgrade datasets to 2.14: https://github.com/huggingface/datasets/releases/tag/2.14.0
main changes:
- use `token` instead of `use_auth_token`
- the default config name is now `default` instead of `username--dataset_name`: we have to refresh all the datasets with only one config
TODO:
- [x] #1589
- [x] #1578
- [x] Refresh all the datasets with only one config
- [x] Refresh all the datasets with `StreamingRowsError`
TODO: 2.14.4
- [x] #1652
- [x] #1659 | closed | 2023-07-24T16:15:34Z | 2023-09-06T12:33:52Z | 2023-09-06T00:20:25Z | severo |
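A small illustration of the first change (the dataset name and token are placeholders): in `datasets` 2.14, `use_auth_token` is replaced by `token` in calls such as `load_dataset`.

```python
from datasets import load_dataset

# Before datasets 2.14 (deprecated):
# ds = load_dataset("some_org/some_private_dataset", use_auth_token="hf_xxx")

# From datasets 2.14 on:
ds = load_dataset("some_org/some_private_dataset", token="hf_xxx")  # hypothetical dataset name
```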
1,818,703,864 | The /rows pods take too long to initialize | The pods for the /rows service can take up to 2 minutes to become available (i.e. respond on /healthcheck). | The /rows pods take too long to initialize: The pods for the /rows service can take up to 2 minutes to become available (i.e. respond on /healthcheck). | closed | 2023-07-24T15:56:05Z | 2023-08-24T15:13:56Z | 2023-08-24T15:13:56Z | severo |
1,818,687,092 | fix(helm): update probes for services pods | null | fix(helm): update probes for services pods: | closed | 2023-07-24T15:46:01Z | 2023-07-24T15:47:55Z | 2023-07-24T15:47:54Z | rtrompier |
1,818,686,624 | Update prod.yaml | remove bigcode/the-stack from supportedDatasets, since it's supported anyway (copy of parquet files) | Update prod.yaml: remove bigcode/the-stack from supportedDatasets, since it's supported anyway (copy of parquet files) | closed | 2023-07-24T15:45:44Z | 2023-07-24T15:46:25Z | 2023-07-24T15:45:49Z | severo |
1,818,643,276 | fix: use dedicated nodes for rows pods | null | fix: use dedicated nodes for rows pods: | closed | 2023-07-24T15:21:39Z | 2023-07-24T15:22:45Z | 2023-07-24T15:22:44Z | rtrompier |
1,818,639,949 | feat: ๐ธ unblock all datasets but Graphcore (and echarlaix) ones | 1aurent/icdar-2011,Abuelnour/json_1000_Scientific_Paper,Biomedical-TeMU/ProfNER_corpus_NER,Biomedical-TeMU/ProfNER_corpus_classification,Biomedical-TeMU/SPACCC_Sentence-Splitter,Carlisle/msmarco-passage-non-abs,Champion/vpc2020_clear_anon_speech,CristianaLazar/librispeech500,CristianaLazar/librispeech5k_train,DTU54DL/librispeech5k-augmentated-train-prepared,DavidVivancos/MindBigData2022_Imagenet_IN_Spct,EfaceD/ElysiumInspirations,HamdiJr/Egyptian_hieroglyphs,HuggingFaceM4/TextCaps,HuggingFaceM4/epic_kitchens_100,HuggingFaceM4/general-pmd-synthetic-testing,HuggingFaceM4/yttemporal180m,HugoLaurencon/IIIT-5K,HugoLaurencon/libri_light,HugoLaurencon/libri_light_bytes,Isma/librispeech_1000_seed_42,KETI-AIR/vqa,KnutJaegersberg/Interpretable_word_embeddings_large_cskg,KokeCacao/oracle,LanceaKing/asvspoof2019,Lehrig/GTZAN-Collection,Lehrig/Monkey-Species-Collection,LeoFeng/MLHW_6,Leyo/TGIF,Livingwithmachines/MapReader_Data_SIGSPATIAL_2022,MorVentura/TRBLLmaker,Murple/mmcrsc,NLPC-UOM/document_alignment_dataset-Sinhala-Tamil-English,Nart/parallel-ab-ru,Pinguin/images,PolyAI/evi,Poupou/Gitcoin-Grant-DataBuilder,RAYZ/Mixed-Dia,RaphaelOlivier/whisper_adversarial_examples,Rodion/uno_sustainable_development_goals,SamAct/medium_cleaned,Samip/Scotch,SocialGrep/the-reddit-climate-change-dataset,Sreyan88/librispeech_asr,TalTechNLP/VoxLingua107,Tevatron/xor-tydi-corpus,TomTBT/pmc_open_access_figure,TomTBT/pmc_open_access_section,Tristan/olm-october-2022-tokenized-1024-exact-dedup-only,Voicemod/LibriTTS-100-preproc,Whispering-GPT/linustechtips-transcript-audio,YWjimmy/PeRFception-ScanNet,YWjimmy/PeRFception-v1-1,YWjimmy/PeRFception-v1-2,YWjimmy/PeRFception-v1-3,Yehor/ukrainian-tts-lada,ZihaoLin/zhlds,albertvillanova/TextCaps,andreagasparini/librispeech_test_only,andreagasparini/librispeech_train_clean_only,andreagasparini/librispeech_train_other_only,arpelarpe/nota,autoevaluator/shoes-vs-sandals-vs-boots,azraahmadi/autotrain-data-xraydatasetp2,bengaliAI/cvbn,benschill/brain-tumor-collection,bigbio/anat_em,bigbio/ctebmsp,bigbio/distemist,bigcode/the-stack-username-to-repo,biglam/early_printed_books_font_detection,biglam/gallica_literary_fictions,bigscience/massive-probing-results,bio-datasets/e3c,biwi_kinect_head_pose,bruno-cotrim/arch-max,cahya/fleurs,cahya/librivox-indonesia,cameronbc/synthtiger,cjvt/cc_gigafida,cjvt/slo_collocations,cjvt/sloleks,cmudrc/porous-microstructure-strain-fields,cooleel/xfund_de,corentinm7/MyoQuant-SDH-Data,crystina-z/miracl-bm25-negative,crystina-z/mmarco,crystina-z/mmarco-corpus,crystina-z/msmarco-passage-dl19,crystina-z/msmarco-passage-dl20,crystina-z/no-nonself-mrtydi,crystina-z/xor-tydi-corpus,darkproger/librispeech_asr,dgrnd4/stanford_dog_dataset,dlwh/MultiLegalPile_Wikipedia_Shuffled,fcakyon/gun-object-detection,florianbussmann/train_tickets-yu2020pick,galman33/gal_yair_166000_256x256_fixed,genjib/LAVISHData,grasshoff/lhc_sents,guangguang/azukijpg,hr16/Miwano-Rag,icelab/ntrs_meta,ilhanemirhan/eee543,iluvvatar/RuREBus,imvladikon/paranames,indonesian-nlp/librivox-indonesia,inseq/divemt_attributions,izumaru/os2-datasets,jamescalam/movielens-25m-ratings,jamescalam/unsplash-25k-images,jerpint/imagenette,joefox/Mozilla_Common_Voice_ru_test_noise,joelito/MultiLegalPile_Wikipedia_Filtered,jpwahle/dblp-discovery-dataset,kaliansh/sdaia,keremberke/garbage-object-detection,keremberke/protective-equipment-detection,keremberke/smoke-object-detection,keshan/clean-si-mc4,keshan/multispeaker-tts-sinhala,khalidalt
/tydiqa-primary,kresnik/librispeech_asr_test,ksaml/Stanford_dogs,lafi23333/ds,leviethoang/VBVLSP,m-aliabbas/idrak_splitted_amy_1,malteos/paperswithcode-aspects,marinone94/nst_no,marinone94/nst_sv,matchbench/dbp15k-fr-en,mathaillah/BeritaHoaks-NonHoaks,mbarnig/lb-de-fr-en-pt-12800-TTS-CORPUS,mesolitica/dbp,mesolitica/noisy-en-ms-augmentation,mesolitica/noisy-ms-en-augmentation,mesolitica/translated-SQUAD,metashift,momilla/Ethereum_transacitons,mozilla-foundation/common_voice_2_0,mozilla-foundation/common_voice_3_0,mozilla-foundation/common_voice_4_0,mozilla-foundation/common_voice_5_0,mozilla-foundation/common_voice_5_1,mozilla-foundation/common_voice_6_0,mulcyber/europarl-mono,mwhanna/ACT-Thor,mwitiderrick/arXiv,nateraw/auto-cats-and-dogs,nateraw/imagenet-sketch,nateraw/quickdraw,nateraw/rice-image-dataset,nateraw/rice-image-dataset-2,nateraw/wit,nev/anime-giph,nishita/ade20k-sample,nlphuji/utk_faces,nlphuji/vasr,nuprl/MultiPL-E-raw-data,nvm472001/cvdataset-layoutlmv3,openclimatefix/era5,openclimatefix/nimrod-uk-1km-validation,oyk100/ChaSES-data,parambharat/kannada_asr_corpus,parambharat/mile_dataset,parambharat/telugu_asr_corpus,plncmm/wl-disease,plncmm/wl-family-member,polinaeterna/vox_lingua,pragnakalp/squad_v2_french_translated,raghav66/whisper-gpt,robertmyers/pile_v2,rogerdehe/xfund,rohitp1/librispeech_asr_clean,rossevine/tesis,sanchit-gandhi/librispeech_asr_clean,severo/wit,shanya/crd3,sil-ai/audio-keyword-spotting,sil-ai/audio-kw-in-context,sjpmpzx/qm_ly_gy_soundn,sled-umich/Action-Effect,strombergnlp/broad_twitter_corpus,student/celebA,tau/mrqa,texturedesign/td01_natural-ground-textures,tilos/ASR-CCANTCSC,uva-irlab/trec-cast-2019-multi-turn,valurank/PoliticalBias_AllSides_Txt,voidful/librispeech_asr_text,winvoker/lvis,wmt/europarl,ywchoi/mdpi_sept10,z-uo/female-LJSpeech-italian,zyznull/dureader-retrieval-ranking,zyznull/msmarco-passage-corpus,zyznull/msmarco-passage-ranking
197 datasets unblocked.
Done with https://observablehq.com/@huggingface/blocked-datasets | feat: ๐ธ unblock all datasets but Graphcore (and echarlaix) ones: 1aurent/icdar-2011,Abuelnour/json_1000_Scientific_Paper,Biomedical-TeMU/ProfNER_corpus_NER,Biomedical-TeMU/ProfNER_corpus_classification,Biomedical-TeMU/SPACCC_Sentence-Splitter,Carlisle/msmarco-passage-non-abs,Champion/vpc2020_clear_anon_speech,CristianaLazar/librispeech500,CristianaLazar/librispeech5k_train,DTU54DL/librispeech5k-augmentated-train-prepared,DavidVivancos/MindBigData2022_Imagenet_IN_Spct,EfaceD/ElysiumInspirations,HamdiJr/Egyptian_hieroglyphs,HuggingFaceM4/TextCaps,HuggingFaceM4/epic_kitchens_100,HuggingFaceM4/general-pmd-synthetic-testing,HuggingFaceM4/yttemporal180m,HugoLaurencon/IIIT-5K,HugoLaurencon/libri_light,HugoLaurencon/libri_light_bytes,Isma/librispeech_1000_seed_42,KETI-AIR/vqa,KnutJaegersberg/Interpretable_word_embeddings_large_cskg,KokeCacao/oracle,LanceaKing/asvspoof2019,Lehrig/GTZAN-Collection,Lehrig/Monkey-Species-Collection,LeoFeng/MLHW_6,Leyo/TGIF,Livingwithmachines/MapReader_Data_SIGSPATIAL_2022,MorVentura/TRBLLmaker,Murple/mmcrsc,NLPC-UOM/document_alignment_dataset-Sinhala-Tamil-English,Nart/parallel-ab-ru,Pinguin/images,PolyAI/evi,Poupou/Gitcoin-Grant-DataBuilder,RAYZ/Mixed-Dia,RaphaelOlivier/whisper_adversarial_examples,Rodion/uno_sustainable_development_goals,SamAct/medium_cleaned,Samip/Scotch,SocialGrep/the-reddit-climate-change-dataset,Sreyan88/librispeech_asr,TalTechNLP/VoxLingua107,Tevatron/xor-tydi-corpus,TomTBT/pmc_open_access_figure,TomTBT/pmc_open_access_section,Tristan/olm-october-2022-tokenized-1024-exact-dedup-only,Voicemod/LibriTTS-100-preproc,Whispering-GPT/linustechtips-transcript-audio,YWjimmy/PeRFception-ScanNet,YWjimmy/PeRFception-v1-1,YWjimmy/PeRFception-v1-2,YWjimmy/PeRFception-v1-3,Yehor/ukrainian-tts-lada,ZihaoLin/zhlds,albertvillanova/TextCaps,andreagasparini/librispeech_test_only,andreagasparini/librispeech_train_clean_only,andreagasparini/librispeech_train_other_only,arpelarpe/nota,autoevaluator/shoes-vs-sandals-vs-boots,azraahmadi/autotrain-data-xraydatasetp2,bengaliAI/cvbn,benschill/brain-tumor-collection,bigbio/anat_em,bigbio/ctebmsp,bigbio/distemist,bigcode/the-stack-username-to-repo,biglam/early_printed_books_font_detection,biglam/gallica_literary_fictions,bigscience/massive-probing-results,bio-datasets/e3c,biwi_kinect_head_pose,bruno-cotrim/arch-max,cahya/fleurs,cahya/librivox-indonesia,cameronbc/synthtiger,cjvt/cc_gigafida,cjvt/slo_collocations,cjvt/sloleks,cmudrc/porous-microstructure-strain-fields,cooleel/xfund_de,corentinm7/MyoQuant-SDH-Data,crystina-z/miracl-bm25-negative,crystina-z/mmarco,crystina-z/mmarco-corpus,crystina-z/msmarco-passage-dl19,crystina-z/msmarco-passage-dl20,crystina-z/no-nonself-mrtydi,crystina-z/xor-tydi-corpus,darkproger/librispeech_asr,dgrnd4/stanford_dog_dataset,dlwh/MultiLegalPile_Wikipedia_Shuffled,fcakyon/gun-object-detection,florianbussmann/train_tickets-yu2020pick,galman33/gal_yair_166000_256x256_fixed,genjib/LAVISHData,grasshoff/lhc_sents,guangguang/azukijpg,hr16/Miwano-Rag,icelab/ntrs_meta,ilhanemirhan/eee543,iluvvatar/RuREBus,imvladikon/paranames,indonesian-nlp/librivox-indonesia,inseq/divemt_attributions,izumaru/os2-datasets,jamescalam/movielens-25m-ratings,jamescalam/unsplash-25k-images,jerpint/imagenette,joefox/Mozilla_Common_Voice_ru_test_noise,joelito/MultiLegalPile_Wikipedia_Filtered,jpwahle/dblp-discovery-dataset,kaliansh/sdaia,keremberke/garbage-object-detection,keremberke/protective-equipment-detection,keremberke/smoke-object-detection,keshan/clea
n-si-mc4,keshan/multispeaker-tts-sinhala,khalidalt/tydiqa-primary,kresnik/librispeech_asr_test,ksaml/Stanford_dogs,lafi23333/ds,leviethoang/VBVLSP,m-aliabbas/idrak_splitted_amy_1,malteos/paperswithcode-aspects,marinone94/nst_no,marinone94/nst_sv,matchbench/dbp15k-fr-en,mathaillah/BeritaHoaks-NonHoaks,mbarnig/lb-de-fr-en-pt-12800-TTS-CORPUS,mesolitica/dbp,mesolitica/noisy-en-ms-augmentation,mesolitica/noisy-ms-en-augmentation,mesolitica/translated-SQUAD,metashift,momilla/Ethereum_transacitons,mozilla-foundation/common_voice_2_0,mozilla-foundation/common_voice_3_0,mozilla-foundation/common_voice_4_0,mozilla-foundation/common_voice_5_0,mozilla-foundation/common_voice_5_1,mozilla-foundation/common_voice_6_0,mulcyber/europarl-mono,mwhanna/ACT-Thor,mwitiderrick/arXiv,nateraw/auto-cats-and-dogs,nateraw/imagenet-sketch,nateraw/quickdraw,nateraw/rice-image-dataset,nateraw/rice-image-dataset-2,nateraw/wit,nev/anime-giph,nishita/ade20k-sample,nlphuji/utk_faces,nlphuji/vasr,nuprl/MultiPL-E-raw-data,nvm472001/cvdataset-layoutlmv3,openclimatefix/era5,openclimatefix/nimrod-uk-1km-validation,oyk100/ChaSES-data,parambharat/kannada_asr_corpus,parambharat/mile_dataset,parambharat/telugu_asr_corpus,plncmm/wl-disease,plncmm/wl-family-member,polinaeterna/vox_lingua,pragnakalp/squad_v2_french_translated,raghav66/whisper-gpt,robertmyers/pile_v2,rogerdehe/xfund,rohitp1/librispeech_asr_clean,rossevine/tesis,sanchit-gandhi/librispeech_asr_clean,severo/wit,shanya/crd3,sil-ai/audio-keyword-spotting,sil-ai/audio-kw-in-context,sjpmpzx/qm_ly_gy_soundn,sled-umich/Action-Effect,strombergnlp/broad_twitter_corpus,student/celebA,tau/mrqa,texturedesign/td01_natural-ground-textures,tilos/ASR-CCANTCSC,uva-irlab/trec-cast-2019-multi-turn,valurank/PoliticalBias_AllSides_Txt,voidful/librispeech_asr_text,winvoker/lvis,wmt/europarl,ywchoi/mdpi_sept10,z-uo/female-LJSpeech-italian,zyznull/dureader-retrieval-ranking,zyznull/msmarco-passage-corpus,zyznull/msmarco-passage-ranking
197 datasets unblocked.
Done with https://observablehq.com/@huggingface/blocked-datasets | closed | 2023-07-24T15:19:50Z | 2023-07-24T15:21:46Z | 2023-07-24T15:21:45Z | severo |
1,818,585,100 | fix: resources allocation and use dedicated nodes for worker light | null | fix: resources allocation and use dedicated nodes for worker light: | closed | 2023-07-24T14:50:29Z | 2023-07-24T14:56:23Z | 2023-07-24T14:56:22Z | rtrompier |
1,818,585,045 | feat: ๐ธ unblock datasets with 200 downloads or more | GEM/BiSECT,GEM/references,GEM/xsum,HuggingFaceM4/charades,Karavet/ILUR-news-text-classification-corpus,Lacito/pangloss,SaulLu/Natural_Questions_HTML_reduced_all,SetFit/mnli,Tevatron/beir-corpus,Tevatron/wikipedia-curated-corpus,Tevatron/wikipedia-squad,Tevatron/wikipedia-squad-corpus,Tevatron/wikipedia-trivia-corpus,Tevatron/wikipedia-wq-corpus,angelolab/ark_example,ashraq/dhivehi-corpus,bigbio/ebm_pico,bnl_newspapers,castorini/msmarco_v1_passage_doc2query-t5_expansions,chenghao/scielo_books,clarin-pl/multiwiki_90k,gigant/m-ailabs_speech_dataset_fr,gigant/romanian_speech_synthesis_0_8_1,hebrew_projectbenyehuda,jimregan/clarinpl_sejmsenat,jimregan/clarinpl_studio,mozilla-foundation/common_voice_1_0,mteb/results,mteb/tatoeba-bitext-mining,nlphuji/winogavil,shunk031/cocostuff,shunk031/livedoor-news-corpus,society-ethics/lila_camera_traps,tab_fact,vblagoje/wikipedia_snippets_streamed
I did not remove the datasets
echarlaix/vqa,Graphcore/gqa,Graphcore/vqa,echarlaix/gqa-lxmert,Graphcore/gqa-lxmert,etc. because I remember they use far too much RAM.
Done with https://observablehq.com/@huggingface/blocked-datasets | feat: ๐ธ unblock datasets with 200 downloads or more: GEM/BiSECT,GEM/references,GEM/xsum,HuggingFaceM4/charades,Karavet/ILUR-news-text-classification-corpus,Lacito/pangloss,SaulLu/Natural_Questions_HTML_reduced_all,SetFit/mnli,Tevatron/beir-corpus,Tevatron/wikipedia-curated-corpus,Tevatron/wikipedia-squad,Tevatron/wikipedia-squad-corpus,Tevatron/wikipedia-trivia-corpus,Tevatron/wikipedia-wq-corpus,angelolab/ark_example,ashraq/dhivehi-corpus,bigbio/ebm_pico,bnl_newspapers,castorini/msmarco_v1_passage_doc2query-t5_expansions,chenghao/scielo_books,clarin-pl/multiwiki_90k,gigant/m-ailabs_speech_dataset_fr,gigant/romanian_speech_synthesis_0_8_1,hebrew_projectbenyehuda,jimregan/clarinpl_sejmsenat,jimregan/clarinpl_studio,mozilla-foundation/common_voice_1_0,mteb/results,mteb/tatoeba-bitext-mining,nlphuji/winogavil,shunk031/cocostuff,shunk031/livedoor-news-corpus,society-ethics/lila_camera_traps,tab_fact,vblagoje/wikipedia_snippets_streamed
I did not remove the datasets
echarlaix/vqa,Graphcore/gqa,Graphcore/vqa,echarlaix/gqa-lxmert,Graphcore/gqa-lxmert,etc. because I remember they use far too much RAM.
Done with https://observablehq.com/@huggingface/blocked-datasets | closed | 2023-07-24T14:50:27Z | 2023-07-24T14:51:19Z | 2023-07-24T14:51:18Z | severo |
1,818,511,087 | feat: ๐ธ unblock datasets with at least 3 likes | DelgadoPanadero/Pokemon,HuggingFaceM4/COCO,HuggingFaceM4/FairFace,HuggingFaceM4/VQAv2,HuggingFaceM4/cm4-synthetic-testing,Muennighoff/flores200,VIMA/VIMA-Data,alkzar90/CC6204-Hackaton-Cub-Dataset,asapp/slue,ashraf-ali/quran-data,biglam/brill_iconclass,ccdv/cnn_dailymail,ccdv/mediasum,chrisjay/mnist-adversarial-dataset,evidence_infer_treatment,gigant/african_accented_french,huggan/anime-faces,keremberke/nfl-object-detection,muchocine,opus_euconst,parambharat/malayalam_asr_corpus,stas/openwebtext-10k,textvqa,tner/wikiann
Done with https://observablehq.com/@huggingface/blocked-datasets | feat: ๐ธ unblock datasets with at least 3 likes: DelgadoPanadero/Pokemon,HuggingFaceM4/COCO,HuggingFaceM4/FairFace,HuggingFaceM4/VQAv2,HuggingFaceM4/cm4-synthetic-testing,Muennighoff/flores200,VIMA/VIMA-Data,alkzar90/CC6204-Hackaton-Cub-Dataset,asapp/slue,ashraf-ali/quran-data,biglam/brill_iconclass,ccdv/cnn_dailymail,ccdv/mediasum,chrisjay/mnist-adversarial-dataset,evidence_infer_treatment,gigant/african_accented_french,huggan/anime-faces,keremberke/nfl-object-detection,muchocine,opus_euconst,parambharat/malayalam_asr_corpus,stas/openwebtext-10k,textvqa,tner/wikiann
Done with https://observablehq.com/@huggingface/blocked-datasets | closed | 2023-07-24T14:10:03Z | 2023-07-24T14:11:58Z | 2023-07-24T14:11:57Z | severo |
1,818,069,042 | Add auth to first_rows_from_parquet | related to [ivrit-ai/audio-base](https://huggingface.co/datasets/ivrit-ai/audio-base)
it works the same way as in /rows | Add auth to first_rows_from_parquet: related to [ivrit-ai/audio-base](https://huggingface.co/datasets/ivrit-ai/audio-base)
it works the same way as in /rows | closed | 2023-07-24T09:56:56Z | 2023-07-24T10:17:31Z | 2023-07-24T10:17:30Z | lhoestq |
1,816,197,649 | feat: 🎸 unblock 26 datasets (5 likes or more) | Unblocked datasets:
CodedotAI/code_clippy, HuggingFaceM4/TGIF, SLPL/naab-raw, SocialGrep/ten-million-reddit-answers, ami, backslashlim/LoRA-Datasets, biglam/nls_chapbook_illustrations, cats_vs_dogs, common_language, cornell_movie_dialog, dalle-mini/YFCC100M_OpenAI_subset, joelito/lextreme, lj_speech, mozilla-foundation/common_voice_10_0, multilingual_librispeech, nuprl/MultiPL-E, nyanko7/yandere-images, openslr, orieg/elsevier-oa-cc-by, qanastek/MASSIVE, tau/scrolls, turkic_xwmt, universal_morphologies, vctk, web_nlg, yhavinga/ccmatrix
Done with https://observablehq.com/@huggingface/blocked-datasets | feat: 🎸 unblock 26 datasets (5 likes or more): Unblocked datasets:
CodedotAI/code_clippy, HuggingFaceM4/TGIF, SLPL/naab-raw, SocialGrep/ten-million-reddit-answers, ami, backslashlim/LoRA-Datasets, biglam/nls_chapbook_illustrations, cats_vs_dogs, common_language, cornell_movie_dialog, dalle-mini/YFCC100M_OpenAI_subset, joelito/lextreme, lj_speech, mozilla-foundation/common_voice_10_0, multilingual_librispeech, nuprl/MultiPL-E, nyanko7/yandere-images, openslr, orieg/elsevier-oa-cc-by, qanastek/MASSIVE, tau/scrolls, turkic_xwmt, universal_morphologies, vctk, web_nlg, yhavinga/ccmatrix
Done with https://observablehq.com/@huggingface/blocked-datasets | closed | 2023-07-21T18:11:34Z | 2023-07-21T18:12:25Z | 2023-07-21T18:12:24Z | severo |
1,816,046,131 | feat: 🎸 unblock impactful datasets | reazon-research/reazonspeech, tapaco, ccdv/arxiv-summarization, competition_math, mozilla-foundation/common_voice_7_0, ds4sd/DocLayNet, beyond/chinese_clean_passages_80m, xglue, miracl/miracl, superb
done with https://observablehq.com/@huggingface/blocked-datasets | feat: 🎸 unblock impactful datasets: reazon-research/reazonspeech, tapaco, ccdv/arxiv-summarization, competition_math, mozilla-foundation/common_voice_7_0, ds4sd/DocLayNet, beyond/chinese_clean_passages_80m, xglue, miracl/miracl, superb
done with https://observablehq.com/@huggingface/blocked-datasets | closed | 2023-07-21T16:01:02Z | 2023-07-21T16:02:41Z | 2023-07-21T16:02:40Z | severo |
1,814,793,743 | Update aiohttp | Fix aiohttp in admin_ui and libapi | Update aiohttp: Fix aiohttp in admin_ui and libapi | closed | 2023-07-20T21:01:42Z | 2023-07-20T21:17:44Z | 2023-07-20T21:17:43Z | AndreaFrancis |
1,814,420,492 | Update aiohttp dependency version | null | Update aiohttp dependency version: | closed | 2023-07-20T16:48:10Z | 2023-07-20T17:03:40Z | 2023-07-20T17:03:39Z | AndreaFrancis |
1,814,410,347 | K8s job to periodically remove indexes | Cron Job to delete downloaded files on https://github.com/huggingface/datasets-server/pull/1516 | K8s job to periodically remove indexes: Cron Job to delete downloaded files on https://github.com/huggingface/datasets-server/pull/1516 | closed | 2023-07-20T16:41:29Z | 2023-08-04T16:03:00Z | 2023-08-04T16:02:59Z | AndreaFrancis |
1,814,369,068 | chore(deps): bump aiohttp from 3.8.4 to 3.8.5 in /libs/libcommon | Bumps [aiohttp](https://github.com/aio-libs/aiohttp) from 3.8.4 to 3.8.5.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/aio-libs/aiohttp/releases">aiohttp's releases</a>.</em></p>
<blockquote>
<h2>3.8.5</h2>
<h2>Security bugfixes</h2>
<ul>
<li>
<p>Upgraded the vendored copy of llhttp_ to v8.1.1 -- by :user:<code>webknjaz</code>
and :user:<code>Dreamsorcerer</code>.</p>
<p>Thanks to :user:<code>sethmlarson</code> for reporting this and providing us with
comprehensive reproducer, workarounds and fixing details! For more
information, see
<a href="https://github.com/aio-libs/aiohttp/security/advisories/GHSA-45c4-8wx5-qw6w">https://github.com/aio-libs/aiohttp/security/advisories/GHSA-45c4-8wx5-qw6w</a>.</p>
<p>.. _llhttp: <a href="https://llhttp.org">https://llhttp.org</a></p>
<p>(<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7346">#7346</a>)</p>
</li>
</ul>
<h2>Features</h2>
<ul>
<li>
<p>Added information to C parser exceptions to show which character caused the error. -- by :user:<code>Dreamsorcerer</code></p>
<p>(<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7366">#7366</a>)</p>
</li>
</ul>
<h2>Bugfixes</h2>
<ul>
<li>
<p>Fixed a transport is :data:<code>None</code> error -- by :user:<code>Dreamsorcerer</code>.</p>
<p>(<a href="https://redirect.github.com/aio-libs/aiohttp/issues/3355">#3355</a>)</p>
</li>
</ul>
<hr />
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/aio-libs/aiohttp/blob/v3.8.5/CHANGES.rst">aiohttp's changelog</a>.</em></p>
<blockquote>
<h1>3.8.5 (2023-07-19)</h1>
<h2>Security bugfixes</h2>
<ul>
<li>
<p>Upgraded the vendored copy of llhttp_ to v8.1.1 -- by :user:<code>webknjaz</code>
and :user:<code>Dreamsorcerer</code>.</p>
<p>Thanks to :user:<code>sethmlarson</code> for reporting this and providing us with
comprehensive reproducer, workarounds and fixing details! For more
information, see
<a href="https://github.com/aio-libs/aiohttp/security/advisories/GHSA-45c4-8wx5-qw6w">https://github.com/aio-libs/aiohttp/security/advisories/GHSA-45c4-8wx5-qw6w</a>.</p>
<p>.. _llhttp: <a href="https://llhttp.org">https://llhttp.org</a></p>
<p><code>[#7346](https://github.com/aio-libs/aiohttp/issues/7346) <https://github.com/aio-libs/aiohttp/issues/7346></code>_</p>
</li>
</ul>
<h2>Features</h2>
<ul>
<li>
<p>Added information to C parser exceptions to show which character caused the error. -- by :user:<code>Dreamsorcerer</code></p>
<p><code>[#7366](https://github.com/aio-libs/aiohttp/issues/7366) <https://github.com/aio-libs/aiohttp/issues/7366></code>_</p>
</li>
</ul>
<h2>Bugfixes</h2>
<ul>
<li>
<p>Fixed a transport is :data:<code>None</code> error -- by :user:<code>Dreamsorcerer</code>.</p>
<p><code>[#3355](https://github.com/aio-libs/aiohttp/issues/3355) <https://github.com/aio-libs/aiohttp/issues/3355></code>_</p>
</li>
</ul>
<hr />
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/aio-libs/aiohttp/commit/9c13a52c21c23dfdb49ed89418d28a5b116d0681"><code>9c13a52</code></a> Bump aiohttp to v3.8.5 a security release</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/7c02129567bc4ec59be467b70fc937c82920948c"><code>7c02129</code></a> ๏ฃ Bump pypa/cibuildwheel to v2.14.1</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/135a45e9d655d56e4ebad78abe84f1cb7b5c62dc"><code>135a45e</code></a> Improve error messages from C parser (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7366">#7366</a>) (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7380">#7380</a>)</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/9337fb3f2ab2b5f38d7e98a194bde6f7e3d16c40"><code>9337fb3</code></a> Fix bump llhttp to v8.1.1 (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7367">#7367</a>) (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7377">#7377</a>)</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/f07e9b44b5cb909054a697c8dd447b30dbf8073e"><code>f07e9b4</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7373">#7373</a>/66e261a5 backport][3.8] Drop azure mention (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7374">#7374</a>)</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/01d9b70e5477cd746561b52225992d8a2ebde953"><code>01d9b70</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7370">#7370</a>/22c264ce backport][3.8] fix: Spelling error fixed (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7371">#7371</a>)</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/3577b1e3719d4648fa973dbdec927f78f9df34dd"><code>3577b1e</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7359">#7359</a>/7911f1e9 backport][3.8] ๏ฃ Set up secretless publishing to PyPI (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7360">#7360</a>)</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/8d45f9c99511cd80140d6658bd9c11002c697f1c"><code>8d45f9c</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7333">#7333</a>/3a54d378 backport][3.8] Fix TLS transport is <code>None</code> error (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7357">#7357</a>)</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/dd8e24e77351df9c0f029be49d3c6d7862706e79"><code>dd8e24e</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7343">#7343</a>/18057581 backport][3.8] Mention encoding in <code>yarl.URL</code> (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7355">#7355</a>)</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/40874103ebfaa1007d47c25ecc4288af873a07cf"><code>4087410</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7346">#7346</a>/346fd202 backport][3.8] ๏ฃ Bump vendored llhttp to v8.1.1 (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7352">#7352</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/aio-libs/aiohttp/compare/v3.8.4...v3.8.5">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/datasets-server/network/alerts).
</details> | chore(deps): bump aiohttp from 3.8.4 to 3.8.5 in /libs/libcommon: Bumps [aiohttp](https://github.com/aio-libs/aiohttp) from 3.8.4 to 3.8.5.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/aio-libs/aiohttp/releases">aiohttp's releases</a>.</em></p>
<blockquote>
<h2>3.8.5</h2>
<h2>Security bugfixes</h2>
<ul>
<li>
<p>Upgraded the vendored copy of llhttp_ to v8.1.1 -- by :user:<code>webknjaz</code>
and :user:<code>Dreamsorcerer</code>.</p>
<p>Thanks to :user:<code>sethmlarson</code> for reporting this and providing us with
comprehensive reproducer, workarounds and fixing details! For more
information, see
<a href="https://github.com/aio-libs/aiohttp/security/advisories/GHSA-45c4-8wx5-qw6w">https://github.com/aio-libs/aiohttp/security/advisories/GHSA-45c4-8wx5-qw6w</a>.</p>
<p>.. _llhttp: <a href="https://llhttp.org">https://llhttp.org</a></p>
<p>(<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7346">#7346</a>)</p>
</li>
</ul>
<h2>Features</h2>
<ul>
<li>
<p>Added information to C parser exceptions to show which character caused the error. -- by :user:<code>Dreamsorcerer</code></p>
<p>(<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7366">#7366</a>)</p>
</li>
</ul>
<h2>Bugfixes</h2>
<ul>
<li>
<p>Fixed a transport is :data:<code>None</code> error -- by :user:<code>Dreamsorcerer</code>.</p>
<p>(<a href="https://redirect.github.com/aio-libs/aiohttp/issues/3355">#3355</a>)</p>
</li>
</ul>
<hr />
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/aio-libs/aiohttp/blob/v3.8.5/CHANGES.rst">aiohttp's changelog</a>.</em></p>
<blockquote>
<h1>3.8.5 (2023-07-19)</h1>
<h2>Security bugfixes</h2>
<ul>
<li>
<p>Upgraded the vendored copy of llhttp_ to v8.1.1 -- by :user:<code>webknjaz</code>
and :user:<code>Dreamsorcerer</code>.</p>
<p>Thanks to :user:<code>sethmlarson</code> for reporting this and providing us with
comprehensive reproducer, workarounds and fixing details! For more
information, see
<a href="https://github.com/aio-libs/aiohttp/security/advisories/GHSA-45c4-8wx5-qw6w">https://github.com/aio-libs/aiohttp/security/advisories/GHSA-45c4-8wx5-qw6w</a>.</p>
<p>.. _llhttp: <a href="https://llhttp.org">https://llhttp.org</a></p>
<p><code>[#7346](https://github.com/aio-libs/aiohttp/issues/7346) <https://github.com/aio-libs/aiohttp/issues/7346></code>_</p>
</li>
</ul>
<h2>Features</h2>
<ul>
<li>
<p>Added information to C parser exceptions to show which character caused the error. -- by :user:<code>Dreamsorcerer</code></p>
<p><code>[#7366](https://github.com/aio-libs/aiohttp/issues/7366) <https://github.com/aio-libs/aiohttp/issues/7366></code>_</p>
</li>
</ul>
<h2>Bugfixes</h2>
<ul>
<li>
<p>Fixed a transport is :data:<code>None</code> error -- by :user:<code>Dreamsorcerer</code>.</p>
<p><code>[#3355](https://github.com/aio-libs/aiohttp/issues/3355) <https://github.com/aio-libs/aiohttp/issues/3355></code>_</p>
</li>
</ul>
<hr />
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/aio-libs/aiohttp/commit/9c13a52c21c23dfdb49ed89418d28a5b116d0681"><code>9c13a52</code></a> Bump aiohttp to v3.8.5 a security release</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/7c02129567bc4ec59be467b70fc937c82920948c"><code>7c02129</code></a> ๏ฃ Bump pypa/cibuildwheel to v2.14.1</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/135a45e9d655d56e4ebad78abe84f1cb7b5c62dc"><code>135a45e</code></a> Improve error messages from C parser (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7366">#7366</a>) (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7380">#7380</a>)</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/9337fb3f2ab2b5f38d7e98a194bde6f7e3d16c40"><code>9337fb3</code></a> Fix bump llhttp to v8.1.1 (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7367">#7367</a>) (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7377">#7377</a>)</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/f07e9b44b5cb909054a697c8dd447b30dbf8073e"><code>f07e9b4</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7373">#7373</a>/66e261a5 backport][3.8] Drop azure mention (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7374">#7374</a>)</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/01d9b70e5477cd746561b52225992d8a2ebde953"><code>01d9b70</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7370">#7370</a>/22c264ce backport][3.8] fix: Spelling error fixed (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7371">#7371</a>)</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/3577b1e3719d4648fa973dbdec927f78f9df34dd"><code>3577b1e</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7359">#7359</a>/7911f1e9 backport][3.8] ๏ฃ Set up secretless publishing to PyPI (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7360">#7360</a>)</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/8d45f9c99511cd80140d6658bd9c11002c697f1c"><code>8d45f9c</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7333">#7333</a>/3a54d378 backport][3.8] Fix TLS transport is <code>None</code> error (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7357">#7357</a>)</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/dd8e24e77351df9c0f029be49d3c6d7862706e79"><code>dd8e24e</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7343">#7343</a>/18057581 backport][3.8] Mention encoding in <code>yarl.URL</code> (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7355">#7355</a>)</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/40874103ebfaa1007d47c25ecc4288af873a07cf"><code>4087410</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7346">#7346</a>/346fd202 backport][3.8] ๏ฃ Bump vendored llhttp to v8.1.1 (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7352">#7352</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/aio-libs/aiohttp/compare/v3.8.4...v3.8.5">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/datasets-server/network/alerts).
</details> | closed | 2023-07-20T16:18:56Z | 2023-07-20T17:55:12Z | 2023-07-20T17:55:08Z | dependabot[bot] |
1,814,324,677 | Remove datasets from the blocklist | The analysis is here: https://observablehq.com/@huggingface/blocked-datasets
We remove:
- the datasets that do not exist anymore on the Hub or are private
- the 5 most liked datasets: bigscience/P3, google/fleurs, mc4, bigscience/xP3, allenai/nllb | Remove datasets from the blocklist: The analysis is here: https://observablehq.com/@huggingface/blocked-datasets
We remove:
- the datasets that do not exist anymore on the Hub or are private
- the 5 most liked datasets: bigscience/P3, google/fleurs, mc4, bigscience/xP3, allenai/nllb | closed | 2023-07-20T15:53:57Z | 2023-07-20T16:24:25Z | 2023-07-20T16:24:24Z | severo |
1,813,537,760 | Separate parquet metadata by split | Since we added partial conversion to parquet, we introduced the new config/split/ssss.parquet paths, but the parquet metadata worker was not following it and therefore splits could overwrite each other
This affects any dataset with partial conversion and multiple splits, e.g. c4
Related to https://github.com/huggingface/datasets-server/issues/1483 | Separate parquet metadata by split: Since we added partial conversion to parquet, we introduced the new config/split/ssss.parquet paths, but the parquet metadata worker was not following it and therefore splits could overwrite each other
This affects any dataset with partial conversion and multiple splits, e.g. c4
Related to https://github.com/huggingface/datasets-server/issues/1483 | closed | 2023-07-20T09:16:19Z | 2023-07-20T14:03:08Z | 2023-07-20T13:17:35Z | lhoestq |
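A minimal sketch of the kind of fix described, with an assumed path layout; only the idea of keying cached metadata by config and split is taken from the issue:

```python
from pathlib import Path

def parquet_metadata_path(base_dir: str, dataset: str, config: str, split: str, filename: str) -> Path:
    # Key the cached metadata by config AND split so that, e.g., c4's "train"
    # and "validation" metadata are written to different files.
    return Path(base_dir) / dataset / config / split / filename

train = parquet_metadata_path("/parquet-metadata", "c4", "en", "train", "0000.parquet")
validation = parquet_metadata_path("/parquet-metadata", "c4", "en", "validation", "0000.parquet")
assert train != validation  # the two splits can no longer overwrite each other
```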
1,812,647,671 | provide one "partial" field per entry in aggregated responses | For example, https://datasets-server.huggingface.co/size?dataset=c4 only provides a global `partial: true` field and the response does not make explicit that the "train" split is partial, while the "test" one is complete.
Every entry in `configs` and `splits` should also include its own `partial` field, to be able to show this information in the viewer (selects)
- currently:
<img width="1528" alt="Capture dโeฬcran 2023-07-19 aฬ 16 00 28" src="https://github.com/huggingface/datasets-server/assets/1676121/92d27982-0fa3-44f2-a73f-a0ae614da40c">
- ideally, something like:
<img width="1529" alt="Capture dโeฬcran 2023-07-19 aฬ 16 01 39" src="https://github.com/huggingface/datasets-server/assets/1676121/c638af93-30de-4ab7-8fdd-389202d41c88">
Endpoints where we want these extra fields:
- /info, dataset-level
- /size, dataset-level
- /size, config-level
| provide one "partial" field per entry in aggregated responses: For example, https://datasets-server.huggingface.co/size?dataset=c4 only provides a global `partial: true` field and the response does not make explicit that the "train" split is partial, while the "test" one is complete.
Every entry in `configs` and `splits` should also include its own `partial` field, to be able to show this information in the viewer (selects)
- currently:
<img width="1528" alt="Capture dโeฬcran 2023-07-19 aฬ 16 00 28" src="https://github.com/huggingface/datasets-server/assets/1676121/92d27982-0fa3-44f2-a73f-a0ae614da40c">
- ideally, something like:
<img width="1529" alt="Capture dโeฬcran 2023-07-19 aฬ 16 01 39" src="https://github.com/huggingface/datasets-server/assets/1676121/c638af93-30de-4ab7-8fdd-389202d41c88">
Endpoints where we want these extra fields:
- /info, dataset-level
- /size, dataset-level
- /size, config-level
| open | 2023-07-19T20:01:58Z | 2024-05-16T09:36:20Z | null | severo |
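For illustration, the requested response shape could look roughly like this; only the `partial`, `configs`, and `splits` fields come from the issue, the other fields and values are assumptions:

```python
# Hypothetical /size?dataset=c4 response with per-entry "partial" fields:
size_response = {
    "size": {
        "dataset": {"dataset": "c4", "num_rows": 123, "partial": True},
        "configs": [
            {"dataset": "c4", "config": "en", "num_rows": 123, "partial": True},
        ],
        "splits": [
            {"dataset": "c4", "config": "en", "split": "train", "num_rows": 100, "partial": True},
            {"dataset": "c4", "config": "en", "split": "validation", "num_rows": 23, "partial": False},
        ],
    },
    "partial": True,  # the existing global flag is kept
}
```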
1,811,756,650 | Fix libapi and rows in dev docker | null | Fix libapi and rows in dev docker: | closed | 2023-07-19T11:34:39Z | 2023-07-19T11:35:11Z | 2023-07-19T11:35:10Z | lhoestq |
1,810,669,838 | Moving some /rows shared utils | Some classes and functions from /rows will be used in https://github.com/huggingface/datasets-server/pull/1516 and https://github.com/huggingface/datasets-server/pull/1418; to avoid duplicate code, this moves some of them to dedicated utils or to existing files.
| Moving some /rows shared utils: Some classes and functions from /rows will be used in https://github.com/huggingface/datasets-server/pull/1516 and https://github.com/huggingface/datasets-server/pull/1418; to avoid duplicate code, this moves some of them to dedicated utils or to existing files.
| closed | 2023-07-18T20:30:41Z | 2023-07-18T20:49:56Z | 2023-07-18T20:49:55Z | AndreaFrancis |
1,810,545,680 | feat: 🎸 unblock allenai/c4 | also: sort the list, and remove 4 duplicates | feat: 🎸 unblock allenai/c4: also: sort the list, and remove 4 duplicates | closed | 2023-07-18T19:11:15Z | 2023-07-18T19:12:07Z | 2023-07-18T19:12:04Z | severo |
1,810,529,045 | Reduce the number of manually blocked datasets | 327 datasets (+ 4 duplicates) are currently blocked
https://github.com/huggingface/datasets-server/blob/902d9ac2cc951ed1a132086fc71d0aa70dc020fa/chart/env/prod.yaml#L116
With the improvements that have been made, we should ultimately be able to remove many of them.
See https://github.com/huggingface/datasets-server/issues/1483#issuecomment-1640801975 | Reduce the number of manually blocked datasets: 327 datasets (+ 4 duplicates) are currently blocked
https://github.com/huggingface/datasets-server/blob/902d9ac2cc951ed1a132086fc71d0aa70dc020fa/chart/env/prod.yaml#L116
With the improvements that have been made, we should ultimately be able to remove many of them.
See https://github.com/huggingface/datasets-server/issues/1483#issuecomment-1640801975 | closed | 2023-07-18T19:01:22Z | 2023-07-24T15:41:17Z | 2023-07-24T15:41:17Z | severo |
1,810,369,144 | /rows: raise an error if a dataset has too big row groups | It can happen if a dataset was converted to parquet before the recent row group size optimization, e.g. garythung/trashnet
Currently it makes the worker crash.
We could also refresh the parquet export of the dataset when this happens | /rows: raise an error if a dataset has too big row groups: It can happen if a dataset was converted to parquet before the recent row group size optimization, e.g. garythung/trashnet
Currently it makes the worker crash.
We could also refresh the parquet export of the dataset when this happens | closed | 2023-07-18T17:07:39Z | 2023-09-05T17:33:37Z | 2023-09-05T17:33:37Z | lhoestq |
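A hedged sketch of the described check, with an assumed error name and threshold: inspect the parquet metadata before reading and fail explicitly instead of letting the worker run out of memory.

```python
import pyarrow.parquet as pq

MAX_ROW_GROUP_BYTE_SIZE = 300_000_000  # hypothetical threshold

class TooBigRowGroupsError(Exception):
    pass

def check_row_group_size(parquet_file: pq.ParquetFile) -> None:
    metadata = parquet_file.metadata
    for i in range(metadata.num_row_groups):
        size = metadata.row_group(i).total_byte_size
        if size > MAX_ROW_GROUP_BYTE_SIZE:
            raise TooBigRowGroupsError(
                f"row group {i} is {size} bytes, above the {MAX_ROW_GROUP_BYTE_SIZE} limit; "
                "the parquet export should be refreshed with smaller row groups"
            )
```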
1,810,242,251 | add Hub API convenience endpoint in parquet docs | close https://github.com/huggingface/datasets-server/issues/1400 | add Hub API convenience endpoint in parquet docs: close https://github.com/huggingface/datasets-server/issues/1400 | closed | 2023-07-18T16:00:15Z | 2023-07-19T12:03:07Z | 2023-07-19T12:02:36Z | lhoestq |
1,808,675,183 | Update dependencies cryptography and scipy | Updating dependencies to try to fix CI
name = "scipy" - version = "1.10.1" -> "1.11.1"
name = "cryptography" - version = "41.0.1" -> "41.0.2" | Update dependencies cryptography and scipy: Updating dependencies to try to fix CI
name = "scipy" - version = "1.10.1" -> "1.11.1"
name = "cryptography" - version = "41.0.1" -> "41.0.2" | closed | 2023-07-17T21:44:16Z | 2023-07-18T20:20:02Z | 2023-07-18T20:20:00Z | AndreaFrancis |
1,808,266,851 | feat: 🎸 reduce resources | null | feat: 🎸 reduce resources: | closed | 2023-07-17T17:44:24Z | 2023-07-17T17:44:59Z | 2023-07-17T17:44:30Z | severo |
1,808,208,578 | Ignore scipy in pip audit | ...to fix the ci. | Ignore scipy in pip audit: ...to fix the ci. | closed | 2023-07-17T17:10:07Z | 2023-07-17T17:40:32Z | 2023-07-17T17:20:32Z | lhoestq |
1,808,205,135 | Use `CONSTANT_LIST.copy` in list config fields | See https://github.com/huggingface/datasets-server/pull/1508#discussion_r1265658458
In particular `get_empty_str_list` should not be used anymore. Same with `default_factory=list` | Use `CONSTANT_LIST.copy` in list config fields: See https://github.com/huggingface/datasets-server/pull/1508#discussion_r1265658458
In particular `get_empty_str_list` should not be used anymore. Same with `default_factory=list` | open | 2023-07-17T17:08:34Z | 2023-08-17T15:44:09Z | null | severo |
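A tiny sketch of the convention being asked for; the config class and constant names are made up for illustration:

```python
from dataclasses import dataclass, field
from typing import List

BLOCKED_DATASETS: List[str] = []  # hypothetical module-level constant holding the default value

@dataclass(frozen=True)
class ExampleConfig:
    # Preferred: reuse the named constant, copying it so instances never share the default list.
    blocked_datasets: List[str] = field(default_factory=BLOCKED_DATASETS.copy)
    # Discouraged per the issue: ad-hoc helpers like get_empty_str_list, or a bare
    # default_factory=list that hides what the actual default is.
```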
1,808,172,649 | Create a new endpoint with info on size + parquet metadata | See https://github.com/huggingface/datasets-server/pull/1503#issuecomment-1625161886 | Create a new endpoint with info on size + parquet metadata: See https://github.com/huggingface/datasets-server/pull/1503#issuecomment-1625161886 | closed | 2023-07-17T16:50:50Z | 2024-02-09T10:22:01Z | 2024-02-09T10:22:01Z | severo |
1,808,146,416 | Always verify parquet before copying | close https://github.com/huggingface/datasets-server/issues/1519 | Always verify parquet before copying: close https://github.com/huggingface/datasets-server/issues/1519 | closed | 2023-07-17T16:35:22Z | 2023-07-18T17:47:35Z | 2023-07-18T17:17:41Z | lhoestq |
1,808,120,939 | Some datasets are converted to parquet with too big row groups, which makes the viewer crash | ... and workers to OOM
eg IDEA-CCNL/laion2B-multi-chinese-subset
```python
In [1]: import fsspec; import pyarrow.parquet as pq
In [2]: url = "https://huggingface.co/datasets/IDEA-CCNL/laion2B-multi-chinese-subset/resolve/main/data/train-00000-of-00013.parquet"
In [3]: pf = pq.ParquetFile(fsspec.open(url).open())
In [4]: pf.metadata
Out[4]:
<pyarrow._parquet.FileMetaData object at 0x11f419450>
created_by: parquet-cpp-arrow version 7.0.0
num_columns: 10
num_rows: 11177146
num_row_groups: 1
format_version: 1.0
serialized_size: 5880
In [5]: pf.metadata.row_group(0).total_byte_size
Out[5]: 2085382973
In [6]: pf.metadata.row_group(0)
Out[6]:
<pyarrow._parquet.RowGroupMetaData object at 0x106609590>
num_columns: 10
num_rows: 11177146
total_byte_size: 2085382973
``` | Some datasets are converted to parquet with too big row groups, which makes the viewer crash: ... and workers to OOM
eg IDEA-CCNL/laion2B-multi-chinese-subset
```python
In [1]: import fsspec; import pyarrow.parquet as pq
In [2]: url = "https://huggingface.co/datasets/IDEA-CCNL/laion2B-multi-chinese-subset/resolve/main/data/train-00000-of-00013.parquet"
In [3]: pf = pq.ParquetFile(fsspec.open(url).open())
In [4]: pf.metadata
Out[4]:
<pyarrow._parquet.FileMetaData object at 0x11f419450>
created_by: parquet-cpp-arrow version 7.0.0
num_columns: 10
num_rows: 11177146
num_row_groups: 1
format_version: 1.0
serialized_size: 5880
In [5]: pf.metadata.row_group(0).total_byte_size
Out[5]: 2085382973
In [6]: pf.metadata.row_group(0)
Out[6]:
<pyarrow._parquet.RowGroupMetaData object at 0x106609590>
num_columns: 10
num_rows: 11177146
total_byte_size: 2085382973
``` | closed | 2023-07-17T16:21:05Z | 2023-07-18T17:17:42Z | 2023-07-18T17:17:42Z | lhoestq |
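For context, a short example of how a writer can bound row group sizes with pyarrow; the 100 rows per group value is only illustrative:

```python
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"text": ["example"] * 1_000, "label": list(range(1_000))})

# row_group_size bounds the number of rows per row group, so readers such as
# the viewer can fetch a page without loading ~2 GB at once.
pq.write_table(table, "train-00000-of-00001.parquet", row_group_size=100)
```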
1,804,224,185 | https://huggingface.co/datasets/ccmusic-database/vocal_range/discussions/1 | Could HF please print more error messages with the code line for our own code instead of the official framework message? It would be hard for us to debug when an error like this happens | https://huggingface.co/datasets/ccmusic-database/vocal_range/discussions/1: Could HF please print more error messages with the code line for our own code instead of the official framework message? It would be hard for us to debug when an error like this happens | closed | 2023-07-14T05:50:20Z | 2023-07-17T17:17:41Z | 2023-07-17T17:17:41Z | monetjoe |
1,801,569,193 | Reduce resources | null | Reduce resources: | closed | 2023-07-12T18:48:52Z | 2023-07-12T18:49:58Z | 2023-07-12T18:49:57Z | AndreaFrancis |
1,801,219,006 | feat: /search endpoint | Second part of FTS implementation using duckdb for https://github.com/huggingface/datasets-server/issues/629
This PR introduces a new endpoint `/search` in a new project **search** service with the following parameters:
- dataset
- config
- split
- query
- offset (by default 0)
- length (by default 100)
The process of performing a search consists in:
1. Validate parameters
2. Validate authentication
3. Validate split-duckdb-index cache result in order to verify if indexing has been previously done correctly
4. Download the index file from the /refs/convert/parquet revision if it is not already present locally; otherwise, use it directly
5. Perform full text search using duckdb
6. Return response with similar format as /rows
```
features: List[FeatureItem]
rows: Any
num_total_rows: int --> I added this new field because I think it will be used in the UI for pagination
```
Note.- In another PR, I will add a k8s Job that will periodically delete the downloaded files based on their last accessed date -> https://github.com/huggingface/datasets-server/pull/1536 | feat: /search endpoint: Second part of FTS implementation using duckdb for https://github.com/huggingface/datasets-server/issues/629
This PR introduces a new endpoint `/search` in a new project **search** service with the following parameters:
- dataset
- config
- split
- query
- offset (by default 0)
- length (by default 100)
The process of performing a search consists in:
1. Validate parameters
2. Validate authentication
3. Validate split-duckdb-index cache result in order to verify if indexing has been previously done correctly
4. Download the index file from the /refs/convert/parquet revision if it is not already present locally; otherwise, use it directly
5. Perform full text search using duckdb
6. Return response with similar format as /rows
```
features: List[FeatureItem]
rows: Any
num_total_rows: int --> I added this new field because I think it will be used in the UI for pagination
```
Note.- In another PR, I will add a k8s Job that will periodically delete the downloaded files based on their last accessed date -> https://github.com/huggingface/datasets-server/pull/1536 | closed | 2023-07-12T15:20:42Z | 2023-08-02T18:52:48Z | 2023-08-02T18:52:47Z | AndreaFrancis |
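For illustration, here is a minimal sketch of the duckdb full-text-search step described in the record above (step 5). It assumes an index file named `index.duckdb`, a table named `data`, a key column `__hf_index_id`, and an FTS index created at indexing time with `PRAGMA create_fts_index`; none of these names are guaranteed to match the actual search service.
```python
import duckdb

# Sketch only: file, table and column names are assumptions, not the real schema.
con = duckdb.connect("index.duckdb", read_only=True)
con.execute("INSTALL fts; LOAD fts;")  # the FTS extension provides match_bm25

query, offset, length = "knock knock", 0, 100
sql = """
    SELECT * FROM (
        SELECT *, fts_main_data.match_bm25(__hf_index_id, ?) AS __hf_fts_score
        FROM data
    )
    WHERE __hf_fts_score IS NOT NULL
    ORDER BY __hf_fts_score DESC
    LIMIT ? OFFSET ?;
"""
# rows for the response page, as a pyarrow Table
rows = con.execute(sql, [query, length, offset]).arrow()
# num_total_rows for pagination
num_total_rows = con.execute(
    "SELECT count(*) FROM (SELECT fts_main_data.match_bm25(__hf_index_id, ?) AS s FROM data) WHERE s IS NOT NULL",
    [query],
).fetchone()[0]
```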
1,799,131,197 | Fix optional download_size | Should fix https://huggingface.co/datasets/Open-Orca/OpenOrca dataset viewer.
The dataset was stream converted completely so partial is False but the download_size is still None because streaming was used. | Fix optional download_size: Should fix https://huggingface.co/datasets/Open-Orca/OpenOrca dataset viewer.
The dataset was stream converted completely so partial is False but the download_size is still None because streaming was used. | closed | 2023-07-11T14:54:19Z | 2023-07-11T15:47:04Z | 2023-07-11T15:47:03Z | lhoestq |
1,799,011,327 | Adding partial ttl index to locks | Adding a 10 min TTL index to locks collections. | Adding partial ttl index to locks: Adding a 10 min TTL index to locks collections. | closed | 2023-07-11T13:59:34Z | 2023-07-11T15:05:08Z | 2023-07-11T15:05:06Z | AndreaFrancis |
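As a reference, a TTL index like the one mentioned above can be declared directly in the mongoengine document meta; the `Lock` model below is a simplified, hypothetical sketch, not the actual libcommon schema.
```python
from datetime import datetime

from mongoengine import Document
from mongoengine.fields import DateTimeField, StringField


class Lock(Document):
    # hypothetical fields, for illustration only
    key = StringField(primary_key=True)
    job_id = StringField()
    created_at = DateTimeField(default=datetime.utcnow)

    meta = {
        "collection": "locks",
        "indexes": [
            # TTL index: MongoDB deletes documents ~10 minutes after created_at
            {"fields": ["created_at"], "expireAfterSeconds": 600},
        ],
    }
```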
1,798,939,875 | (minor) rename noMaxSizeLimitDatasets | just a better naming
following https://github.com/huggingface/datasets-server/pull/1508 | (minor) rename noMaxSizeLimitDatasets: just a better naming
following https://github.com/huggingface/datasets-server/pull/1508 | closed | 2023-07-11T13:24:01Z | 2023-07-11T14:47:43Z | 2023-07-11T14:47:18Z | lhoestq |
1,797,587,010 | Last index sync and Increase resources | It looks like db performance has improved, so try to increase resources to flush the jobs queue. | Last index sync and Increase resources: It looks like db performance has improved, so try to increase resources to flush the jobs queue. | closed | 2023-07-10T21:05:54Z | 2023-07-10T22:45:25Z | 2023-07-10T22:45:24Z | AndreaFrancis
1,797,532,756 | Sync advised indexes by Atlas | Syncing the indexes advised by Mongo Atlas and removing an unused index (replaced with a new one).
 | Sync advised indexes by Atlas: Syncing the indexes advised by Mongo Atlas and removing an unused index (replaced with a new one).
| closed | 2023-07-10T20:27:02Z | 2023-07-10T20:39:48Z | 2023-07-10T20:39:46Z | AndreaFrancis |
1,797,169,668 | Add priority param to force refresh endpoint | Right now I always have to manually set the priority in mongo to "normal" | Add priority param to force refresh endpoint: Right now I always have to manually set the priority in mongo to "normal" | closed | 2023-07-10T17:00:14Z | 2023-07-11T11:31:48Z | 2023-07-11T11:31:46Z | lhoestq |
1,796,790,972 | Try to improve index usage for Job collection | Removing some indexes that are already covered by other existing ones, to speed up query planning
Index | Proposal | Index Alternative
-- | -- | --
"dataset", | DELETE: No query using only dataset |
("dataset", "revision", "status"), | DELETE | ("type", "dataset", "revision", "config", "split", "status", "priority"),
("type", "dataset", "status"), | KEEP |
("type", "dataset", "revision", "config", "split", "status", "priority"), | KEEP |
("priority", "status", "created_at", "type", "namespace"), | DELETE | ("priority", "status", "created_at", "difficulty", "namespace", "type", "unicity_id"),
("priority", "status", "created_at", "difficulty", "namespace", "type", "unicity_id"), | KEEP |
("priority", "status", "namespace", "type", "created_at"), | DELETE | ("priority", "status", "created_at", "difficulty", "namespace", "type", "unicity_id"),
("priority", "status", "created_at", "namespace", "-difficulty"), | DELETE | ("priority", "status", "created_at", "difficulty", "namespace", "type", "unicity_id"),
("status", "type"), | KEEP |
("status", "namespace", "priority", "type", "created_at"), | DELETE | ("priority", "status", "created_at", "difficulty", "namespace", "type", "unicity_id"),
("status", "namespace", "unicity_id", "priority", "type", "created_at"), | DELETE | ("priority", "status", "created_at", "difficulty", "namespace", "type", "unicity_id"),
"-created_at", | DELETE: No query using only created_at |
"finished_at" | KEEP |
("unicity", "status", "created_at"), | KEEP |
| Try to improve index usage for Job collection: Removing some indexes that are already covered by other existing ones, to speed up query planning
Index | Proposal | Index Alternative
-- | -- | --
"dataset", | DELETE: No query using only dataset |
("dataset", "revision", "status"), | DELETE | ("type", "dataset", "revision", "config", "split", "status", "priority"),
("type", "dataset", "status"), | KEEP |
("type", "dataset", "revision", "config", "split", "status", "priority"), | KEEP |
("priority", "status", "created_at", "type", "namespace"), | DELETE | ("priority", "status", "created_at", "difficulty", "namespace", "type", "unicity_id"),
("priority", "status", "created_at", "difficulty", "namespace", "type", "unicity_id"), | KEEP |
("priority", "status", "namespace", "type", "created_at"), | DELETE | ("priority", "status", "created_at", "difficulty", "namespace", "type", "unicity_id"),
("priority", "status", "created_at", "namespace", "-difficulty"), | DELETE | ("priority", "status", "created_at", "difficulty", "namespace", "type", "unicity_id"),
("status", "type"), | KEEP |
("status", "namespace", "priority", "type", "created_at"), | DELETE | ("priority", "status", "created_at", "difficulty", "namespace", "type", "unicity_id"),
("status", "namespace", "unicity_id", "priority", "type", "created_at"), | DELETE | ("priority", "status", "created_at", "difficulty", "namespace", "type", "unicity_id"),
"-created_at", | DELETE: No query using only created_at |
"finished_at" | KEEP |
("unicity", "status", "created_at"), | KEEP |
| closed | 2023-07-10T13:30:07Z | 2023-07-10T17:02:53Z | 2023-07-10T17:02:52Z | AndreaFrancis |
1,796,535,091 | Add fully converted datasets | To fully convert https://huggingface.co/datasets/Open-Orca/OpenOrca to parquet (top 1 trending dataset right now) | Add fully converted datasets: To fully convert https://huggingface.co/datasets/Open-Orca/OpenOrca to parquet (top 1 trending dataset right now) | closed | 2023-07-10T11:14:40Z | 2023-07-17T17:19:27Z | 2023-07-10T13:47:44Z | lhoestq |
1,796,383,767 | Reduce rows lru cache | It was causing the workers' memory to keep increasing and finally OOM.
If there are still memory errors after that I might remove the LRU cache altogether | Reduce rows lru cache: It was causing the workers' memory to keep increasing and finally OOM.
If there are still memory errors after that I might remove the LRU cache altogether | closed | 2023-07-10T09:44:14Z | 2023-07-17T17:27:04Z | 2023-07-10T09:49:00Z | lhoestq |
1,793,852,790 | Memory efficient config-parquet-metadata | I moved some code to make the job write the parquet metadata to disk as it is downloaded, instead of keeping it all in RAM and writing it all at the end.
should help for https://github.com/huggingface/datasets-server/issues/1502 | Memory efficient config-parquet-metadata: I moved some code to make the job write the parquet metadata to disk as it is downloaded, instead of keeping it all in RAM and writing it all at the end.
should help for https://github.com/huggingface/datasets-server/issues/1502 | closed | 2023-07-07T16:51:19Z | 2023-07-10T11:14:46Z | 2023-07-10T11:14:45Z | lhoestq |
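For reference, a minimal sketch of the approach described in the record above: persist each parquet footer to disk as soon as it is read, instead of accumulating `FileMetaData` objects in RAM. The URL, directory layout and use of fsspec are assumptions.
```python
import os

import fsspec
import pyarrow.parquet as pq

parquet_urls = [
    "https://huggingface.co/datasets/some-dataset/resolve/refs%2Fconvert%2Fparquet/default/0000.parquet",  # placeholder
]
metadata_dir = "/parquet-metadata/some-dataset/default"  # placeholder
os.makedirs(metadata_dir, exist_ok=True)

for url in parquet_urls:
    with fsspec.open(url).open() as f:
        metadata = pq.ParquetFile(f).metadata  # only the footer is read
    # write the footer to disk right away and let the in-memory object be garbage collected
    metadata.write_metadata_file(os.path.join(metadata_dir, os.path.basename(url)))
```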
1,793,642,507 | Sync mongodb indexes | Syncing some existing and helpful indexes db->code (these already exist)
- ("priority", "status", "namespace", "type", "created_at"),
- ("priority", "status", "created_at", "namespace", "-difficulty"),
Removing some useless ones like:
- "status"
- ("type", "status")
- ("priority", "status", "created_at", "namespace", "unicity_id")
- ("priority", "status", "type", "created_at", "namespace", "unicity_id")
- ("priority", "status", "created_at", "namespace", "type", "unicity_id")
According to Atlas, those indexes are used less than once per minute and could worsen performance.
In any case, if they are needed, they can be added again later. | Sync mongodb indexes : Syncing some existing and helpful indexes db->code (these already exist)
- ("priority", "status", "namespace", "type", "created_at"),
- ("priority", "status", "created_at", "namespace", "-difficulty"),
Removing some useless ones like:
- "status"
- ("type", "status")
- ("priority", "status", "created_at", "namespace", "unicity_id")
- ("priority", "status", "type", "created_at", "namespace", "unicity_id")
- ("priority", "status", "created_at", "namespace", "type", "unicity_id")
According to Atlas, those indexes are used less than once per minute and could worsen performance.
In any case, if they are needed, they can be added again later. | closed | 2023-07-07T14:25:31Z | 2023-07-18T19:26:43Z | 2023-07-10T10:33:00Z | AndreaFrancis
1,792,196,768 | Validate source Parquet files before linking in refs/convert/parquet | While trying to read a parquet file from dataset revision refs/convert/parquet (generated by datasets-server) with duckdb, it throws the following error:
```
D select * from 'https://huggingface.co/datasets/Pavithra/sampled-code-parrot-train-100k/resolve/refs%2Fconvert%2Fparquet/Pavithra--sampled-code-parrot-train-100k/parquet-train.parquet' limit 10;
Error: Invalid Input Error: No magic bytes found at end of file 'https://huggingface.co/datasets/Pavithra/sampled-code-parrot-train-100k/resolve/refs%2Fconvert%2Fparquet/Pavithra--sampled-code-parrot-train-100k/parquet-train.parquet'
```
Parquet file:
https://huggingface.co/datasets/Pavithra/sampled-code-parrot-train-100k/resolve/refs%2Fconvert%2Fparquet/Pavithra--sampled-code-parrot-train-100k/parquet-train.parquet
Not sure if this is an error in the parquet generation logic. Should we validate non-corrupted files are being pushed to the dataset repo?
I was able to reproduce the same (corrupted error) with polars:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/andrea/.pyenv/versions/3.9.15/lib/python3.9/site-packages/polars/io/parquet/functions.py", line 123, in read_parquet
return pl.DataFrame._read_parquet(
File "/home/andrea/.pyenv/versions/3.9.15/lib/python3.9/site-packages/polars/dataframe/frame.py", line 865, in _read_parquet
self._df = PyDataFrame.read_parquet(
exceptions.ArrowErrorException: ExternalFormat("File out of specification: The file must end with PAR1")
```
| Validate source Parquet files before linking in refs/convert/parquet: While trying to read a parquet file from dataset revision refs/convert/parquet (generated by datasets-server) with duckdb, it throws the following error:
```
D select * from 'https://huggingface.co/datasets/Pavithra/sampled-code-parrot-train-100k/resolve/refs%2Fconvert%2Fparquet/Pavithra--sampled-code-parrot-train-100k/parquet-train.parquet' limit 10;
Error: Invalid Input Error: No magic bytes found at end of file 'https://huggingface.co/datasets/Pavithra/sampled-code-parrot-train-100k/resolve/refs%2Fconvert%2Fparquet/Pavithra--sampled-code-parrot-train-100k/parquet-train.parquet'
```
Parquet file:
https://huggingface.co/datasets/Pavithra/sampled-code-parrot-train-100k/resolve/refs%2Fconvert%2Fparquet/Pavithra--sampled-code-parrot-train-100k/parquet-train.parquet
Not sure if this is an error in the parquet generation logic. Should we validate non-corrupted files are being pushed to the dataset repo?
I was able to reproduce the same (corrupted error) with polars:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/andrea/.pyenv/versions/3.9.15/lib/python3.9/site-packages/polars/io/parquet/functions.py", line 123, in read_parquet
return pl.DataFrame._read_parquet(
File "/home/andrea/.pyenv/versions/3.9.15/lib/python3.9/site-packages/polars/dataframe/frame.py", line 865, in _read_parquet
self._df = PyDataFrame.read_parquet(
exceptions.ArrowErrorException: ExternalFormat("File out of specification: The file must end with PAR1")
```
| open | 2023-07-06T20:35:45Z | 2024-06-19T14:17:51Z | null | AndreaFrancis |
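A cheap validation along the lines suggested above would be to check the parquet magic bytes at both ends of the file before linking it; the sketch below uses fsspec ranged reads and is illustrative only.
```python
import fsspec

PARQUET_MAGIC = b"PAR1"


def looks_like_valid_parquet(url: str) -> bool:
    # read only the first and last 4 bytes; a valid parquet file starts and ends with PAR1
    with fsspec.open(url).open() as f:
        head = f.read(4)
        f.seek(-4, 2)  # 4 bytes before the end of the file
        tail = f.read(4)
    return head == PARQUET_MAGIC and tail == PARQUET_MAGIC


# usage (placeholder URL):
# looks_like_valid_parquet("https://huggingface.co/datasets/.../parquet-train.parquet")
```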
1,791,866,859 | Remove parquet index without metadata | The parquet index without metadata is too slow and causes the majority of the OOMs on /rows atm.
I think it's best to remove it completely.
For now it would cause some datasets like [tiiuae/falcon-refinedweb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) and [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) to have a pagination that returns an error.
PR https://github.com/huggingface/datasets-server/pull/1497 should fix it for [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca), and I opened an issue https://github.com/huggingface/datasets-server/issues/1502 for [tiiuae/falcon-refinedweb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
Ultimately we will need a way for the Hub to know if pagination is available (i.e. if parquet-metadata are available) instead of relying on the /size endpoint (or whatever it's using right now) | Remove parquet index without metadata: The parquet index without metadata is too slow and causes the majority of the OOMs on /rows atm.
I think it's best to remove it completely.
For now it would cause some datasets like [tiiuae/falcon-refinedweb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) and [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) to have a pagination that returns an error.
PR https://github.com/huggingface/datasets-server/pull/1497 should fix it for [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca), and I opened an issue https://github.com/huggingface/datasets-server/issues/1502 for [tiiuae/falcon-refinedweb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
Ultimately we will need a way for the Hub to know if pagination is available (i.e. if parquet-metadata are available) instead of relying on the /size endpoint (or whatever it's using right now) | closed | 2023-07-06T16:23:30Z | 2023-07-17T16:51:08Z | 2023-07-07T09:50:36Z | lhoestq |
1,791,860,052 | Parquet metadata OOM for tiiuae/falcon-refinedweb | Logs show nothing but the kube YAML of the pod shows OOMKilled.
The job should be improved to be more memory efficient.
This issue prevents us from having pagination for this dataset. | Parquet metadata OOM for tiiuae/falcon-refinedweb: Logs show nothing but the kube YAML of the pod shows OOMKilled.
The job should be improved to be more memory efficient.
This issue prevents us from having pagination for this dataset. | closed | 2023-07-06T16:18:25Z | 2023-07-13T16:57:19Z | 2023-07-13T16:57:19Z | lhoestq |
1,791,745,222 | rollback: Exclude parquet volume from EFS | null | rollback: Exclude parquet volume from EFS: | closed | 2023-07-06T15:05:34Z | 2023-07-06T15:13:17Z | 2023-07-06T15:13:13Z | AndreaFrancis |
1,791,703,730 | Fix - Call volumeCache | null | Fix - Call volumeCache: | closed | 2023-07-06T14:42:06Z | 2023-07-06T14:43:15Z | 2023-07-06T14:43:13Z | AndreaFrancis |
1,791,665,812 | Fix volume refs in volumeMount | null | Fix volume refs in volumeMount: | closed | 2023-07-06T14:21:06Z | 2023-07-06T14:26:04Z | 2023-07-06T14:26:02Z | AndreaFrancis |
1,791,434,379 | Delete `/config-names` endpoint | Part of https://github.com/huggingface/datasets-server/issues/1086 | Delete `/config-names` endpoint: Part of https://github.com/huggingface/datasets-server/issues/1086 | closed | 2023-07-06T12:09:37Z | 2023-07-07T13:44:24Z | 2023-07-07T13:44:23Z | polinaeterna |
1,790,007,590 | Convert if too big row groups for copy | Before copying the parquet files I check that the row groups are not too big. Otherwise it can cause OOM for users that would like to use the parquet export, and also because it would make the dataset viewer too slow.
To do that, I check the first row group of the first parquet files and check their size.
If one row group is bigger than 500MB, we don't copy and `stream_to_parquet` is used instead.
Since we have the row group size, I also estimate an optimal `writer_batch_size` that I pass to `stream_to_parquet`. It must be a factor of 100 rows and have a row group size that is smaller than 500MB.
I currently set the limit to 500MB to fix https://huggingface.co/datasets/Open-Orca/OpenOrca, but if there are other problematic datasets that need to be handled this way we can adapt the limit.
close https://github.com/huggingface/datasets-server/issues/1491 | Convert if too big row groups for copy: Before copying the parquet files I check that the row groups are not too big. Otherwise it can cause OOM for users that would like to use the parquet export, and also because it would make the dataset viewer too slow.
To do that, I check the first row group of the first parquet files and check their size.
If one row group is bigger than 500MB, we don't copy and `stream_to_parquet` is used instead.
Since we have the row group size, I also estimate an optimal `writer_batch_size` that I pass to `stream_to_parquet`. It must be a factor of 100 rows and have a row group size that is smaller than 500MB.
I currently set the limit to 500MB to fix https://huggingface.co/datasets/Open-Orca/OpenOrca, but if there are other problematic datasets that need to be handled this way we can adapt the limit.
close https://github.com/huggingface/datasets-server/issues/1491 | closed | 2023-07-05T17:41:49Z | 2023-07-06T16:12:28Z | 2023-07-06T16:12:27Z | lhoestq |
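A rough sketch of the heuristic described above (not necessarily the exact formula used in the job): inspect the first row group of the first parquet file and derive a `writer_batch_size`, as a multiple of 100 rows, whose estimated row group size stays under the 500MB limit.
```python
import fsspec
import pyarrow.parquet as pq

MAX_ROW_GROUP_BYTE_SIZE_FOR_COPY = 500 * 1024 * 1024  # the 500MB limit mentioned above


def estimate_writer_batch_size(first_parquet_url: str) -> int:
    # look only at the first row group of the first file (cheap: footer only)
    with fsspec.open(first_parquet_url).open() as f:
        row_group = pq.ParquetFile(f).metadata.row_group(0)
    bytes_per_row = row_group.total_byte_size / row_group.num_rows
    max_rows = int(MAX_ROW_GROUP_BYTE_SIZE_FOR_COPY / bytes_per_row)
    # round down to a multiple of 100 rows, with a floor of 100
    return max(100, (max_rows // 100) * 100)
```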
1,789,913,319 | feat: ๐ธ reduce the number of workers | null | feat: ๐ธ reduce the number of workers: | closed | 2023-07-05T16:37:41Z | 2023-07-05T16:38:16Z | 2023-07-05T16:37:46Z | severo |
1,789,898,795 | Adding EFS volumes for cache, parquet and duckdb storage | Related to https://github.com/huggingface/datasets-server/issues/1407
Adding new volumes for: cache (datasets library), parquet and duckdb
Based on https://github.com/huggingface/infra/pull/607, the persistenceVolumeClaims should be:
- datasets-server-cache-pvc
- datasets-server-parquet-pvc
- datasets-server-duckdb-pvc
I think it depends on https://github.com/huggingface/infra/pull/607/files to be merged/implemented first. | Adding EFS volumes for cache, parquet and duckdb storage: Related to https://github.com/huggingface/datasets-server/issues/1407
Adding new volumes for: cache (datasets library), parquet and duckdb
Based on https://github.com/huggingface/infra/pull/607, the persistenceVolumeClaims should be:
- datasets-server-cache-pvc
- datasets-server-parquet-pvc
- datasets-server-duckdb-pvc
I think it depends on https://github.com/huggingface/infra/pull/607/files to be merged/implemented first. | closed | 2023-07-05T16:26:36Z | 2023-07-06T14:04:46Z | 2023-07-06T14:04:45Z | AndreaFrancis |
1,789,874,274 | feat: ๐ธ increase resources to flush the jobs | null | feat: ๐ธ increase resources to flush the jobs: | closed | 2023-07-05T16:09:43Z | 2023-07-05T16:10:16Z | 2023-07-05T16:09:48Z | severo |
1,789,859,611 | feat: ๐ธ avoid adding filters on difficulty when not needed | only add a filter if min > 0, or max < 100 | feat: ๐ธ avoid adding filters on difficulty when not needed: only add a filter if min > 0, or max < 100 | closed | 2023-07-05T16:02:10Z | 2023-07-05T16:07:47Z | 2023-07-05T16:07:45Z | severo |
1,789,808,996 | fix: ๐ ensure the env vars are int | note that the limits (min and max) will always be set in the mongo queries. I also added an index to make it quick but let's see if it works well. | fix: ๐ ensure the env vars are int: note that the limits (min and max) will always be set in the mongo queries. I also added an index to make it quick but let's see if it works well. | closed | 2023-07-05T15:32:01Z | 2023-07-05T15:35:19Z | 2023-07-05T15:35:17Z | severo |
1,789,729,737 | Use stream to parquet for slow parquet datasets | Use stream_to_parquet() for parquet datasets with too big row groups to rewrite the parquet data like https://huggingface.co/datasets/Open-Orca/OpenOrca in refs/convert/parquet | Use stream to parquet for slow parquet datasets: Use stream_to_parquet() for parquet datasets with too big row groups to rewrite the parquet data like https://huggingface.co/datasets/Open-Orca/OpenOrca in refs/convert/parquet | closed | 2023-07-05T14:49:39Z | 2023-07-06T16:12:28Z | 2023-07-06T16:12:28Z | lhoestq |
1,789,718,074 | Don't run config-parquet-metadata in light workers | Because it causes an OOM for all the big parquet datasets like the-stack, refinedweb etc.
causing their pagination to hang because it uses the parquet index without metadata which is too slow for big datasets | Don't run config-parquet-metadata in light workers: Because it causes an OOM for all the big parquet datasets like the-stack, refinedweb etc.
causing their pagination to hang because it uses the parquet index without metadata which is too slow for big datasets | closed | 2023-07-05T14:43:45Z | 2023-07-05T14:51:03Z | 2023-07-05T14:51:02Z | lhoestq |
1,789,502,766 | feat: ๐ธ add "difficulty" field to JobDocument | Difficulty is an integer between 0 (easy) and 100 (hard). It aims at filtering the jobs in a specific worker deployment, ie, light workers will only run jobs with difficulty <= 40.
It should make the query to MongoDB quicker than currently (a filter `type: {$in: ALLOW_LIST}`). See #1486.
For now, all the jobs for a specific step have the same difficulty, ie: 20 for dataset-size, since it's only JSON manipulation, while split-duckdb-index is 70 because it can use a lot of RAM and take a lot of time.
Maybe, one day, we could estimate the difficulty of the job, based on previous stats for the same dataset, for example. | feat: ๐ธ add "difficulty" field to JobDocument: Difficulty is an integer between 0 (easy) and 100 (hard). It aims at filtering the jobs in a specific worker deployment, ie, light workers will only run jobs with difficulty <= 40.
It should make the query to MongoDB quicker than currently (a filter `type: {$in: ALLOW_LIST}`). See #1486.
For now, all the jobs for a specific step have the same difficulty, ie: 20 for dataset-size, since it's only JSON manipulation, while split-duckdb-index is 70 because it can use a lot of RAM and take a lot of time.
Maybe, one day, we could estimate the difficulty of the job, based on previous stats for the same dataset, for example. | closed | 2023-07-05T12:57:49Z | 2023-07-05T15:06:01Z | 2023-07-05T15:06:00Z | severo |
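For illustration, a light worker deployment could then restrict itself to easy jobs with a query like the following mongoengine sketch; the model and field names are simplified assumptions, not the actual JobDocument schema.
```python
from mongoengine import Document
from mongoengine.fields import IntField, StringField


class JobDocument(Document):
    # simplified, hypothetical model
    type = StringField(required=True)
    dataset = StringField(required=True)
    status = StringField(required=True)
    difficulty = IntField(required=True)  # 0 (easy) to 100 (hard)

    meta = {"collection": "jobsBlue", "indexes": [("status", "difficulty")]}


def next_waiting_jobs(min_difficulty: int = 0, max_difficulty: int = 100):
    # a light worker would call next_waiting_jobs(0, 40)
    return JobDocument.objects(
        status="waiting",
        difficulty__gte=min_difficulty,
        difficulty__lte=max_difficulty,
    ).order_by("created_at")
```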
1,789,378,839 | Delete `/parquet-and-dataset-info` endpoint | part of https://github.com/huggingface/datasets-server/issues/1086 | Delete `/parquet-and-dataset-info` endpoint: part of https://github.com/huggingface/datasets-server/issues/1086 | closed | 2023-07-05T11:43:48Z | 2023-07-05T12:45:40Z | 2023-07-05T12:45:38Z | polinaeterna |
1,789,375,763 | upgrade huggingface_hub to 0.16 | Needed because we currently rely on a specific commit. Better to depend on a released version | upgrade huggingface_hub to 0.16: Needed because we currently rely on a specific commit. Better to depend on a released version | closed | 2023-07-05T11:41:37Z | 2023-07-25T16:13:40Z | 2023-07-25T16:13:40Z | severo |
1,789,346,549 | feat: ๐ธ query operation $in is faster than $nin | null | feat: ๐ธ query operation $in is faster than $nin: | closed | 2023-07-05T11:22:55Z | 2023-07-05T11:23:44Z | 2023-07-05T11:23:33Z | severo
1,789,326,871 | More logging for /rows | I'd like to understand better why specific requests take so long (eg refinedweb, the-stack).
Locally they work fine but take too much time in prod. | More logging for /rows: I'd like to understand better why specific requests take so long (eg refinedweb, the-stack).
Locally they work fine but take too much time in prod. | closed | 2023-07-05T11:10:08Z | 2023-07-05T13:00:49Z | 2023-07-05T13:00:48Z | lhoestq |
1,788,259,401 | feat: ๐ธ increase RAM for /rows service | Is it the right value @lhoestq ? | feat: ๐ธ increase RAM for /rows service: Is it the right value @lhoestq ? | closed | 2023-07-04T17:21:55Z | 2023-07-04T17:27:27Z | 2023-07-04T17:27:26Z | severo |
1,788,233,793 | C4 pagination failing | current error is an ApiError
```
config size could not be parsed: ValidationError: "size.config.num_bytes_original_files" must be a number
``` | C4 pagination failing: current error is an ApiError
```
config size could not be parsed: ValidationError: "size.config.num_bytes_original_files" must be a number
``` | closed | 2023-07-04T16:52:54Z | 2023-07-18T19:01:32Z | 2023-07-05T10:44:18Z | lhoestq |
1,788,182,042 | diagnose why the mongo server uses so much CPU | we have many alerts on the use of CPU on the mongo server.
```
System: CPU (User) % has gone above 95
```
Why? | diagnose why the mongo server uses so much CPU: we have many alerts on the use of CPU on the mongo server.
```
System: CPU (User) % has gone above 95
```
Why? | closed | 2023-07-04T16:04:06Z | 2024-02-06T14:49:20Z | 2024-02-06T14:49:19Z | severo |
1,788,051,826 | chore: update pypdf dependency in worker | Should fix https://github.com/huggingface/datasets-server/security/dependabot/200
But I'm not sure where exactly we use this library; according to the PyPI page, PyPDF2 is now pypdf (not sure if this will break something):
`NOTE: The PyPDF2 project is going back to its roots. PyPDF2==3.0.X will be the last version of PyPDF2. Development will continue with [pypdf==3.1.0](https://pypi.org/project/pyPdf/)` | chore: update pypdf dependency in worker: Should fix https://github.com/huggingface/datasets-server/security/dependabot/200
But I'm not sure where exactly we use this library; according to the PyPI page, PyPDF2 is now pypdf (not sure if this will break something):
`NOTE: The PyPDF2 project is going back to its roots. PyPDF2==3.0.X will be the last version of PyPDF2. Development will continue with [pypdf==3.1.0](https://pypi.org/project/pyPdf/)` | closed | 2023-07-04T14:31:22Z | 2023-07-05T15:18:30Z | 2023-07-05T15:18:29Z | AndreaFrancis |
1,788,045,153 | Minor fix in update_last_modified_date_of_rows_in_assets_dir | FileNotFoundError can happen because of concurrent api calls, and we can ignore it
(found this error while checking some logs today) | Minor fix in update_last_modified_date_of_rows_in_assets_dir: FileNotFoundError can happen because of concurrent api calls, and we can ignore it
(found this error while checking some logs today) | closed | 2023-07-04T14:28:32Z | 2023-07-04T20:31:31Z | 2023-07-04T20:31:30Z | lhoestq |
1,787,964,180 | Move the /rows endpoint to its own service | We create a new service, services/rows, which handles the /rows endpoint. services/api now serves the rest of the endpoints, but not /rows.
| Move the /rows endpoint to its own service: We create a new service, services/rows, which handles the /rows endpoint. services/api now serves the rest of the endpoints, but not /rows.
| closed | 2023-07-04T13:44:24Z | 2023-07-05T06:55:58Z | 2023-07-04T16:26:15Z | severo |
1,787,581,084 | Optional num_bytes_original_files | Because `download_size` is `None` in `config-and-parquet-info` if the dataset is >5GB | Optional num_bytes_original_files: Because `download_size` is `None` in `config-and-parquet-info` if the dataset is >5GB | closed | 2023-07-04T09:49:29Z | 2023-07-04T12:30:10Z | 2023-07-04T12:30:09Z | lhoestq |
1,787,544,567 | Unblock OSCAR | Now it can be converted to parquet (max 5GB)
I'll manually refresh it | Unblock OSCAR: Now it can be converted to parquet (max 5GB)
I'll manually refresh it | closed | 2023-07-04T09:32:30Z | 2023-07-04T09:38:23Z | 2023-07-04T09:38:22Z | lhoestq |
1,787,512,364 | More workers | Following #1448
note that `cache-maintenance` took care of running a backfill for the datasets >5GB:
```
"DatasetTooBigFromDatasetsError,DatasetTooBigFromHubError,DatasetWithTooBigExternalFilesError,DatasetWithTooManyExternalFilesError"
``` | More workers: Following #1448
note that `cache-maintenance` took care of running a backfill for the datasets >5GB:
```
"DatasetTooBigFromDatasetsError,DatasetTooBigFromHubError,DatasetWithTooBigExternalFilesError,DatasetWithTooManyExternalFilesError"
``` | closed | 2023-07-04T09:15:45Z | 2023-07-04T09:38:15Z | 2023-07-04T09:38:14Z | lhoestq |
1,787,510,453 | Create libapi | This PR prepares the creation of a new API service: services/rows | Create libapi: This PR prepares the creation of a new API service: services/rows | closed | 2023-07-04T09:14:34Z | 2023-07-04T13:32:20Z | 2023-07-04T13:32:18Z | severo |
1,786,901,380 | split-duckdb-index config | Moving config to worker folder since it is not used from any other project.
Also adding doc in the readme file. | split-duckdb-index config: Moving config to worker folder since it is not used from any other project.
Also adding doc in the readme file. | closed | 2023-07-03T22:55:48Z | 2023-07-04T13:32:38Z | 2023-07-04T13:32:37Z | AndreaFrancis |
1,786,615,494 | fix stream_convert_to_parquet for GeneratorBasedBuilder | got
```
"_prepare_split() missing 1 required positional argument: 'check_duplicate_keys'"
```
when converting C4 (config="en")
This query should return the jobs to re-run:
```
{kind: "config-parquet-and-info", http_status: 500, "details.error": "_prepare_split() missing 1 required positional argument: 'check_duplicate_keys'"}
``` | fix stream_convert_to_parquet for GeneratorBasedBuilder: got
```
"_prepare_split() missing 1 required positional argument: 'check_duplicate_keys'"
```
when converting C4 (config="en")
This query should return the jobs to re-run:
```
{kind: "config-parquet-and-info", http_status: 500, "details.error": "_prepare_split() missing 1 required positional argument: 'check_duplicate_keys'"}
``` | closed | 2023-07-03T18:10:44Z | 2023-07-03T18:31:47Z | 2023-07-03T18:31:46Z | lhoestq |
1,786,521,126 | How to show fan-in jobs' results in response ("pending" and "failed" keys) | In cache entries of fan-in jobs we have keys `pending` and `failed`. For example, config-level `/parquet` response has the following format (only "parquet_files" key):
```python
{
"parquet_files": [
{
"dataset": "duorc",
"config": "ParaphraseRC",
"split": "test",
"url": "https://huggingface.co/datasets/duorc/resolve/refs%2Fconvert%2Fparquet/ParaphraseRC/duorc-test.parquet",
"filename": "duorc-test.parquet",
"size": 6136591
},
... # list of parquet files
],
}
```
and for dataset-level it also has `pending` and `failed` keys:
```python
{
"parquet_files": [
{
"dataset": "duorc",
"config": "ParaphraseRC",
"split": "test",
"url": "https://huggingface.co/datasets/duorc/resolve/refs%2Fconvert%2Fparquet/ParaphraseRC/duorc-test.parquet",
"filename": "duorc-test.parquet",
"size": 6136591
},
... # list of parquet files
],
"pending": [],
"failed": []
}
```
To me, undocumented `"pending"` and `"failed"` keys look a bit too technical and unclear.
What we can do:
* document what these keys mean
* don't document them, but for these kinds of endpoints show only examples where all levels are specified (currently it's not like this). So, don't show examples that return the `pending` and `failed` fields.
* anything else? @huggingface/datasets-server | How to show fan-in jobs' results in response ("pending" and "failed" keys): In cache entries of fan-in jobs we have keys `pending` and `failed`. For example, config-level `/parquet` response has the following format (only "parquet_files" key):
```python
{
"parquet_files": [
{
"dataset": "duorc",
"config": "ParaphraseRC",
"split": "test",
"url": "https://huggingface.co/datasets/duorc/resolve/refs%2Fconvert%2Fparquet/ParaphraseRC/duorc-test.parquet",
"filename": "duorc-test.parquet",
"size": 6136591
},
... # list of parquet files
],
}
```
and for dataset-level it also has `pending` and `failed` keys:
```python
{
"parquet_files": [
{
"dataset": "duorc",
"config": "ParaphraseRC",
"split": "test",
"url": "https://huggingface.co/datasets/duorc/resolve/refs%2Fconvert%2Fparquet/ParaphraseRC/duorc-test.parquet",
"filename": "duorc-test.parquet",
"size": 6136591
},
... # list of parquet files
],
"pending": [],
"failed": []
}
```
To me, undocumented `"pending"` and `"failed"` keys look a bit too technical and unclear.
What we can do:
* document what these keys mean
* don't document them, but for these kinds of endpoints show only examples where all levels are specified (currently it's not like this). So, don't show examples that return the `pending` and `failed` fields.
* anything else? @huggingface/datasets-server | open | 2023-07-03T16:49:10Z | 2023-08-11T15:26:24Z | null | polinaeterna |
1,786,497,393 | Add partial to subsequent parquet-and-info jobs | Following #1448 we need all subsequent jobs to `config-parquet-and-info` to have the "partial" field:
- "config-parquet-and-info"
- "config-parquet"
- "dataset-parquet"
- "config-parquet-metadata"
- "config-info"
- "dataset-info"
For dataset level jobs, "partial" is True if there is at least one config with "partial == True".
I updated the code of all the jobs and updated the migration job to add "partial" to existing entries.
I'll update the docs in another PR. | Add partial to subsequent parquet-and-info jobs: Following #1448 we need all subsequent jobs to `config-parquet-and-info` to have the "partial" field:
- "config-parquet-and-info"
- "config-parquet"
- "dataset-parquet"
- "config-parquet-metadata"
- "config-info"
- "dataset-info"
For dataset level jobs, "partial" is True if there is at least one config with "partial == True".
I updated the code of all the jobs and updated the migration job to add "partial" to existing entries.
I'll update the docs in another PR. | closed | 2023-07-03T16:30:59Z | 2023-07-03T17:20:31Z | 2023-07-03T17:20:29Z | lhoestq |
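The dataset-level rule stated above ("partial" is True if at least one config is partial) boils down to a one-liner; the response shapes below are simplified for illustration.
```python
# simplified config-level responses, for illustration only
config_responses = [
    {"config": "en", "partial": True},
    {"config": "fr", "partial": False},
]

dataset_partial = any(response["partial"] for response in config_responses)
assert dataset_partial is True
```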
1,786,295,177 | feat: ๐ธ use Normal priority only for API | webhook and jobs created when a requested cache entry is missing are the only ones with Priority.NORMAL. All the jobs created by administrative tasks (/force-refresh, /backfill, backfill job...) will use Priority.LOW. | feat: ๐ธ use Normal priority only for API: webhook and jobs created when a requested cache entry is missing are the only ones with Priority.NORMAL. All the jobs created by administrative tasks (/force-refresh, /backfill, backfill job...) will use Priority.LOW. | closed | 2023-07-03T14:28:50Z | 2023-07-03T15:26:56Z | 2023-07-03T15:26:55Z | severo |
1,786,026,117 | feat: ๐ธ backfill cache entries older than 90 days | See https://github.com/huggingface/datasets-server/issues/1219.
The idea is to have a general limitation on the duration of the cache. It will make it easier to delete unused resources (assets, cached assets, etc) later: everything older than 120 days (eg) can be deleted. | feat: ๐ธ backfill cache entries older than 90 days: See https://github.com/huggingface/datasets-server/issues/1219.
The idea is to have a general limitation on the duration of the cache. It will make it easier to delete unused resources (assets, cached assets, etc) later: everything older than 120 days (eg) can be deleted. | closed | 2023-07-03T11:55:11Z | 2023-07-03T15:55:06Z | 2023-07-03T15:55:05Z | severo |
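A minimal sketch of how such a job could select the datasets to backfill (entries older than 90 days); the connection string, database and collection names are placeholders, not the real configuration.
```python
from datetime import datetime, timedelta

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
cache = client["datasets_server_cache"]["cachedResponsesBlue"]  # assumed names

threshold = datetime.utcnow() - timedelta(days=90)
# datasets that have at least one cache entry older than the threshold
outdated_datasets = cache.distinct("dataset", {"updated_at": {"$lt": threshold}})
# each of these datasets would then get a low-priority backfill job
```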
1,786,014,299 | Rename `/dataset-info` endpoint to `/info` | Question: do we want to show results of steps that have `pending` and `failed` keys? I assume it might not be clear to users what these mean; they also sound a bit too technical. Should we explain them in the docs, or just not allow access to the dataset-level aggregations (but if so, why do we even need these cache entries)? | Rename `/dataset-info` endpoint to `/info`: Question: do we want to show results of steps that have `pending` and `failed` keys? I assume it might not be clear to users what these mean; they also sound a bit too technical. Should we explain them in the docs, or just not allow access to the dataset-level aggregations (but if so, why do we even need these cache entries)? | closed | 2023-07-03T11:48:06Z | 2023-07-03T17:11:35Z | 2023-07-03T17:11:05Z | polinaeterna
1,785,704,254 | Some jobs have a "finished_at" date, but are still started or waiting |
```
db.jobsBlue.count({"finished_at": {"$exists": true}, "status": {"$nin": ["success", "error", "cancelled"]}})
24
```
For example:
```
{ _id: ObjectId("649f417b849c36335817cfa7"),
type: 'dataset-size',
dataset: 'knowrohit07/know_cot',
revision: 'f89e138e31115fd5b144aa0c52888316e710f752',
unicity_id: 'dataset-size,knowrohit07/know_cot',
namespace: 'knowrohit07',
priority: 'normal',
status: 'started',
created_at: 2023-06-30T20:56:27.051Z,
started_at: 2023-06-30T20:56:27.432Z,
finished_at: 2023-06-30T20:56:27.499Z }
```
And the cache entry:
```
{
"_id": { "$oid": "649f4175bd1b024a84d4b8c7" },
"config": null,
"dataset": "knowrohit07/know_cot",
"kind": "dataset-size",
"split": null,
"content": {
"size": {
"dataset": {
"dataset": "knowrohit07/know_cot",
"num_bytes_original_files": 37076410,
"num_bytes_parquet_files": 13932763,
"num_bytes_memory": 31107926,
"num_rows": 74771
},
"configs": [
{
"dataset": "knowrohit07/know_cot",
"config": "knowrohit07--know_cot",
"num_bytes_original_files": 37076410,
"num_bytes_parquet_files": 13932763,
"num_bytes_memory": 31107926,
"num_rows": 74771,
"num_columns": 3
}
],
"splits": [
{
"dataset": "knowrohit07/know_cot",
"config": "knowrohit07--know_cot",
"split": "train",
"num_bytes_parquet_files": 13932763,
"num_bytes_memory": 31107926,
"num_rows": 74771,
"num_columns": 3
}
]
},
"pending": [],
"failed": []
},
"dataset_git_revision": "f89e138e31115fd5b144aa0c52888316e710f752",
"details": null,
"error_code": null,
"http_status": 200,
"job_runner_version": 2,
"progress": 1,
"updated_at": { "$date": "2023-06-30T20:56:27.488Z" }
}
``` | Some jobs have a "finished_at" date, but are still started or waiting:
```
db.jobsBlue.count({"finished_at": {"$exists": true}, "status": {"$nin": ["success", "error", "cancelled"]}})
24
```
For example:
```
{ _id: ObjectId("649f417b849c36335817cfa7"),
type: 'dataset-size',
dataset: 'knowrohit07/know_cot',
revision: 'f89e138e31115fd5b144aa0c52888316e710f752',
unicity_id: 'dataset-size,knowrohit07/know_cot',
namespace: 'knowrohit07',
priority: 'normal',
status: 'started',
created_at: 2023-06-30T20:56:27.051Z,
started_at: 2023-06-30T20:56:27.432Z,
finished_at: 2023-06-30T20:56:27.499Z }
```
And the cache entry:
```
{
"_id": { "$oid": "649f4175bd1b024a84d4b8c7" },
"config": null,
"dataset": "knowrohit07/know_cot",
"kind": "dataset-size",
"split": null,
"content": {
"size": {
"dataset": {
"dataset": "knowrohit07/know_cot",
"num_bytes_original_files": 37076410,
"num_bytes_parquet_files": 13932763,
"num_bytes_memory": 31107926,
"num_rows": 74771
},
"configs": [
{
"dataset": "knowrohit07/know_cot",
"config": "knowrohit07--know_cot",
"num_bytes_original_files": 37076410,
"num_bytes_parquet_files": 13932763,
"num_bytes_memory": 31107926,
"num_rows": 74771,
"num_columns": 3
}
],
"splits": [
{
"dataset": "knowrohit07/know_cot",
"config": "knowrohit07--know_cot",
"split": "train",
"num_bytes_parquet_files": 13932763,
"num_bytes_memory": 31107926,
"num_rows": 74771,
"num_columns": 3
}
]
},
"pending": [],
"failed": []
},
"dataset_git_revision": "f89e138e31115fd5b144aa0c52888316e710f752",
"details": null,
"error_code": null,
"http_status": 200,
"job_runner_version": 2,
"progress": 1,
"updated_at": { "$date": "2023-06-30T20:56:27.488Z" }
}
``` | closed | 2023-07-03T09:05:37Z | 2023-08-29T14:07:03Z | 2023-08-29T14:07:03Z | severo |