id | title | body | description | state | created_at | updated_at | closed_at | user |
---|---|---|---|---|---|---|---|---|
1,505,256,009 | feat: 🎸 upgrade datasets to 2.8.0 | see https://github.com/huggingface/datasets/releases/tag/2.8.0
remove fix because https://github.com/huggingface/datasets/pull/5333 has been included in the release, and remove deprecated use_auth_token argument in download_and_prepare | feat: 🎸 upgrade datasets to 2.8.0: see https://github.com/huggingface/datasets/releases/tag/2.8.0
remove fix because https://github.com/huggingface/datasets/pull/5333 has been included in the release, and remove deprecated use_auth_token argument in download_and_prepare | closed | 2022-12-20T20:51:30Z | 2022-12-20T21:13:35Z | 2022-12-20T21:13:34Z | severo |
1,505,200,976 | Fix empty commits | null | Fix empty commits: | closed | 2022-12-20T20:19:00Z | 2022-12-20T20:41:40Z | 2022-12-20T20:41:39Z | severo |
1,496,729,338 | docs: ✏️ fix doc | null | docs: ✏️ fix doc: | closed | 2022-12-14T14:16:14Z | 2022-12-14T15:01:49Z | 2022-12-14T14:58:49Z | severo |
1,492,480,177 | feat: 🎸 add method to get the duration of the jobs per dataset | null | feat: 🎸 add method to get the duration of the jobs per dataset: | closed | 2022-12-12T18:20:38Z | 2022-12-13T10:11:55Z | 2022-12-13T10:11:54Z | severo |
1,492,401,863 | feat: 🎸 update the production parameters | null | feat: 🎸 update the production parameters: | closed | 2022-12-12T17:36:56Z | 2022-12-12T17:37:09Z | 2022-12-12T17:37:08Z | severo |
1,488,866,831 | Dataset Viewer issue for chengan/tedlium_small | ### Link
https://huggingface.co/datasets/chengan/tedlium_small
### Description
The dataset viewer is not working for dataset chengan/tedlium_small.
Error details:
```
Error code: ClientConnectionError
```
| Dataset Viewer issue for chengan/tedlium_small: ### Link
https://huggingface.co/datasets/chengan/tedlium_small
### Description
The dataset viewer is not working for dataset chengan/tedlium_small.
Error details:
```
Error code: ClientConnectionError
```
| closed | 2022-12-10T19:34:56Z | 2022-12-13T10:39:57Z | 2022-12-13T10:39:57Z | basanthsk |
1,487,340,938 | Dataset Viewer issue for tarteel-ai/everyayah | ### Link
https://huggingface.co/datasets/tarteel-ai/everyayah
### Description
The dataset viewer is not working for dataset tarteel-ai/everyayah.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for tarteel-ai/everyayah: ### Link
https://huggingface.co/datasets/tarteel-ai/everyayah
### Description
The dataset viewer is not working for dataset tarteel-ai/everyayah.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2022-12-09T19:33:48Z | 2022-12-13T10:39:54Z | 2022-12-13T10:39:54Z | msis |
1,487,203,884 | Dataset Viewer issue for julien-c/autotrain-dreambooth-marsupilami-data | ### Link
https://huggingface.co/datasets/julien-c/autotrain-dreambooth-marsupilami-data
### Description
The dataset viewer is not working for dataset julien-c/autotrain-dreambooth-marsupilami-data.
Error details:
```
Error code: ClientConnectionError
```
| Dataset Viewer issue for julien-c/autotrain-dreambooth-marsupilami-data: ### Link
https://huggingface.co/datasets/julien-c/autotrain-dreambooth-marsupilami-data
### Description
The dataset viewer is not working for dataset julien-c/autotrain-dreambooth-marsupilami-data.
Error details:
```
Error code: ClientConnectionError
```
| closed | 2022-12-09T18:13:58Z | 2022-12-09T18:15:59Z | 2022-12-09T18:15:58Z | julien-c |
1,476,175,495 | feat: 🎸 upgrade from python 3.9.6 to 3.9.15 | because the CI is failing: 3.9.6 seems to have been removed for Ubuntu 22.04 | feat: 🎸 upgrade from python 3.9.6 to 3.9.15: because the CI is failing: 3.9.6 seems to have been removed for Ubuntu 22.04 | closed | 2022-12-05T10:04:20Z | 2022-12-05T10:26:57Z | 2022-12-05T10:23:57Z | severo |
1,473,143,545 | Support HDF5 datasets | Can we add [h5py](https://github.com/h5py/h5py) as a dependency to the viewer workers? It's a pretty common dependency for vision datasets, since it allows reading HDF5 files, which are common in this field (used by 180k+ repos on GitHub)
related to https://huggingface.co/datasets/sayakpaul/nyu_depth_v2/discussions/5#638a1d4dc7f3cc0d220ea643
cc @severo | Support HDF5 datasets: Can we add [h5py](https://github.com/h5py/h5py) as a dependency to the viewer workers? It's a pretty common dependency for vision datasets, since it allows reading HDF5 files, which are common in this field (used by 180k+ repos on GitHub)
related to https://huggingface.co/datasets/sayakpaul/nyu_depth_v2/discussions/5#638a1d4dc7f3cc0d220ea643
cc @severo | closed | 2022-12-02T16:40:53Z | 2022-12-02T17:00:32Z | 2022-12-02T17:00:32Z | lhoestq |
1,471,056,766 | Merge the workers that rely on the datasets library | The docker image size for a worker based on the datasets library is large, mainly due to two dependencies: PyTorch and TensorFlow (about 4GB).
We create /workers/datasets_based for all the workers that depend on datasets and choose the processing step with the `DATASETS_BASED_ENDPOINT` env var. | Merge the workers that rely on the datasets library: The docker image size for a worker based on the datasets library is large, mainly due to two dependencies: PyTorch and TensorFlow (about 4GB).
We create /workers/datasets_based for all the workers that depend on datasets and choose the processing step with the `DATASETS_BASED_ENDPOINT` env var. | closed | 2022-12-01T10:22:38Z | 2022-12-01T12:29:34Z | 2022-12-01T12:29:33Z | severo |
1,469,599,513 | Add a new endpoint: /features | The features are already provided under /first-rows, but some clients, like Autotrain (cc @SBrandeis), are interested in having access to the features, without the first rows.
| Add a new endpoint: /features: The features are already provided under /first-rows, but some clients, like Autotrain (cc @SBrandeis), are interested in having access to the features, without the first rows.
| closed | 2022-11-30T12:59:58Z | 2024-02-02T16:58:06Z | 2024-02-02T16:58:06Z | severo |
1,468,174,220 | Simplify docker | null | Simplify docker: | closed | 2022-11-29T14:36:15Z | 2022-11-29T14:53:13Z | 2022-11-29T14:53:12Z | severo |
1,467,920,824 | feat: 🎸 cancel-jobs must be a POST request, not a GET | null | feat: 🎸 cancel-jobs must be a POST request, not a GET: | closed | 2022-11-29T11:45:54Z | 2022-11-29T11:48:08Z | 2022-11-29T11:48:07Z | severo |
1,467,906,410 | Fix ask access | null | Fix ask access: | closed | 2022-11-29T11:33:54Z | 2022-11-29T11:44:28Z | 2022-11-29T11:44:27Z | severo |
1,467,199,481 | feat: 🎸 add parquet worker | missing:
- [x] unit test: download the parquet files and ensure the data are the same
- [x] docker compose
- [x] e2e
- [x] Helm
- [x] README, DEVELOPER GUIDE
- [x] fix an issue: we receive an "update" webhook every time we write to the `refs/convert/parquet` branch, which generates a new job -> updates are looping. It's OK to receive the webhook, but we should skip the job because the git version of `main` and the worker version should be the same as the cached entry. Fixed with https://github.com/huggingface/datasets-server/pull/651/commits/56614570e15a1c089286ed70d1f7cb5b0904cac7
- [x] docs
- [x] openapi
- [x] fix e2e: the sum of all the Docker images is too large for the GitHub Actions storage, so the e2e action breaks.
some examples:
- [x] monitor the storage used for datasets cache, and possibly delete cache after some time. Internal chat: https://huggingface.slack.com/archives/C01TETY5V8S/p1669805539127299. For now, prod has access to 4TB, which gives sufficient space to start. We should delete the cache data for every dataset after a job.
First datasets with parquet files:
- `duorc`: [files](https://huggingface.co/datasets/duorc/tree/refs%2Fconvert%2Fparquet), [API](https://datasets-server.huggingface.co/parquet?dataset=duorc)
- `rotten_tomatoes`: [files](https://huggingface.co/datasets/rotten_tomatoes/tree/refs%2Fconvert%2Fparquet), [API](https://datasets-server.huggingface.co/parquet?dataset=rotten_tomatoes)
- `severo/mnist`: [files](https://huggingface.co/datasets/severo/mnist/tree/refs%2Fconvert%2Fparquet), [API](https://datasets-server.huggingface.co/parquet?dataset=severo/mnist)
- `severo/glue`: [files](https://huggingface.co/datasets/severo/glue/tree/refs%2Fconvert%2Fparquet), [API](https://datasets-server.huggingface.co/parquet?dataset=severo/glue)
- `severo/danish-wit`: [files](https://huggingface.co/datasets/severo/danish-wit/tree/refs%2Fconvert%2Fparquet), [API](https://datasets-server.huggingface.co/parquet?dataset=severo/danish-wit)
Issues:
- [ ] https://huggingface.co/datasets/Elite35P-Server/EliteVoiceProject gives a 500 error (it's a gated dataset, and we get a "forbidden" error). It should have been ignored because it's not on the list of supported datasets. Update: `Elite35P-Server/EliteVoiceProject` is no longer gated, by the way.
Same with `SciSearch/wiki-data`, `TREC-AToMiC/AToMiC-Images-v0.1`, `TREC-AToMiC/AToMiC-Qrels-v0.1`, `TREC-AToMiC/AToMiC-Texts-v0.1`, `bigcode/the-stack` (it has extra fields), `bigcode/the-stack-dedup` (same), `mitclinicalml/clinical-ie`
- [ ] `severo/embellishments` and `severo/wit`: "An error occurred while generating the dataset"
- [ ] `severo/winogavil`: `Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'datasets/nlphuji/winogavil'. Use `repo_type` argument if needed.`???
once deployed:
- [x] allow parquet conversion for all the datasets (see https://github.com/huggingface/datasets-server/blob/worker-parquet/chart/env/prod.yaml#L197)
- [x] increase the resources (number of nodes) in production
- [x] rapid API: https://rapidapi.com/studio/api_0971eb45-d045-4337-be04-cec27c1122b0/client
- [x] postman: https://www.postman.com/winter-station-148777/workspace/hugging-face-apis/collection/23242779-d068584e-96d1-4d92-a703-7cb12cbd8053
| feat: 🎸 add parquet worker: missing:
- [x] unit test: download the parquet files and ensure the data are the same
- [x] docker compose
- [x] e2e
- [x] Helm
- [x] README, DEVELOPER GUIDE
- [x] fix an issue: we receive an "update" webhook every time we write to the `refs/convert/parquet` branch, which generates a new job -> updates are looping. It's OK to receive the webhook, but we should skip the job because the git version of `main` and the worker version should be the same as the cached entry. Fixed with https://github.com/huggingface/datasets-server/pull/651/commits/56614570e15a1c089286ed70d1f7cb5b0904cac7
- [x] docs
- [x] openapi
- [x] fix e2e: the sum of all the Docker images is too large for the GitHub Actions storage, so the e2e action breaks.
some examples:
- [x] monitor the storage used for datasets cache, and possibly delete cache after some time. Internal chat: https://huggingface.slack.com/archives/C01TETY5V8S/p1669805539127299. For now, prod has access to 4TB, which gives sufficient space to start. We should delete the cache data for every dataset after a job.
First datasets with parquet files:
- `duorc`: [files](https://huggingface.co/datasets/duorc/tree/refs%2Fconvert%2Fparquet), [API](https://datasets-server.huggingface.co/parquet?dataset=duorc)
- `rotten_tomatoes`: [files](https://huggingface.co/datasets/rotten_tomatoes/tree/refs%2Fconvert%2Fparquet), [API](https://datasets-server.huggingface.co/parquet?dataset=rotten_tomatoes)
- `severo/mnist`: [files](https://huggingface.co/datasets/severo/mnist/tree/refs%2Fconvert%2Fparquet), [API](https://datasets-server.huggingface.co/parquet?dataset=severo/mnist)
- `severo/glue`: [files](https://huggingface.co/datasets/severo/glue/tree/refs%2Fconvert%2Fparquet), [API](https://datasets-server.huggingface.co/parquet?dataset=severo/glue)
- `severo/danish-wit`: [files](https://huggingface.co/datasets/severo/danish-wit/tree/refs%2Fconvert%2Fparquet), [API](https://datasets-server.huggingface.co/parquet?dataset=severo/danish-wit)
Issues:
- [ ] https://huggingface.co/datasets/Elite35P-Server/EliteVoiceProject gives a 500 error (it's a gated dataset, and we get a "forbidden" error). It should have been ignored because it's not on the list of supported datasets. Update: `Elite35P-Server/EliteVoiceProject` is no longer gated, by the way.
Same with `SciSearch/wiki-data`, `TREC-AToMiC/AToMiC-Images-v0.1`, `TREC-AToMiC/AToMiC-Qrels-v0.1`, `TREC-AToMiC/AToMiC-Texts-v0.1`, `bigcode/the-stack` (it has extra fields), `bigcode/the-stack-dedup` (same), `mitclinicalml/clinical-ie`
- [ ] `severo/embellishments` and `severo/wit`: "An error occurred while generating the dataset"
- [ ] `severo/winogavil`: `Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'datasets/nlphuji/winogavil'. Use `repo_type` argument if needed.`???
once deployed:
- [x] allow parquet conversion for all the datasets (see https://github.com/huggingface/datasets-server/blob/worker-parquet/chart/env/prod.yaml#L197)
- [x] increase the resources (number of nodes) in production
- [x] rapid API: https://rapidapi.com/studio/api_0971eb45-d045-4337-be04-cec27c1122b0/client
- [x] postman: https://www.postman.com/winter-station-148777/workspace/hugging-face-apis/collection/23242779-d068584e-96d1-4d92-a703-7cb12cbd8053
| closed | 2022-11-28T22:52:13Z | 2022-12-13T10:52:32Z | 2022-12-09T12:57:24Z | severo |
1,466,592,631 | Implement generic processing steps | ### Generic implementation of a processing graph
Remove explicit mentions of /splits or /first-rows from the code, and move them to the "processing graph":
```json
{
"/splits": {"input_type": "dataset", "required_by_dataset_viewer": true},
"/first-rows": {"input_type": "split", "requires": "/splits", "required_by_dataset_viewer": true},
}
```
This JSON (see libcommon.config) defines the *processing steps* (here /splits and /first-rows) and their dependency relationship (here /first-rows depends on /splits). It also defines if a processing step is required by the Hub dataset viewer (used to fill /valid and /is-valid).
A processing step is defined by the endpoint (/splits, /first-rows), where the result of the processing step can be downloaded. The endpoint value is also used as the cache key and the job type.
After this change, adding a new processing step should consist of:
- creating a new worker in the `workers/` directory
- updating the processing graph
- updating the CI, tests, docs and deployment (docker-compose files, e2e tests, docs, openapi, helm chart)
This also means that the services (API, admin) don't contain any code directly mentioning splits or first-rows, and the splits worker does not contain a direct reference to first-rows.
### Other changes
- code: the libcache and libqueue libraries have been merged into libcommon
- the code to check if a dataset is supported (exists, is not private, access can be programmatically obtained if gated) has been factorized and is now used before every processing step and before even accepting to create a new job (through the webhook or through the /admin/force-refresh endpoint).
- add a new endpoint: /admin/cancel-jobs, which replaces the last admin scripts. It's easier to send a POST request than to call a remote script.
- simplify the code of the workers by factorizing some code into libcommon:
- the code to test if a job should be skipped, based on the versions of the git repository and the worker
- the logic to catch errors and to write to the cache
This way, the code for every worker now only contains what is specific to that worker.
### Breaking changes
- env vars `QUEUE_MAX_LOAD_PCT`, `QUEUE_MAX_MEMORY_PCT` and `QUEUE_SLEEP_SECONDS` are renamed as `WORKER_MAX_LOAD_PCT`, `WORKER_MAX_MEMORY_PCT` and `WORKER_SLEEP_SECONDS`. | Implement generic processing steps: ### Generic implementation of a processing graph
Remove explicit mentions of /splits or /first-rows from the code, and move them to the "processing graph":
```json
{
"/splits": {"input_type": "dataset", "required_by_dataset_viewer": true},
"/first-rows": {"input_type": "split", "requires": "/splits", "required_by_dataset_viewer": true},
}
```
This JSON (see libcommon.config) defines the *processing steps* (here /splits and /first-rows) and their dependency relationship (here /first-rows depends on /splits). It also defines if a processing step is required by the Hub dataset viewer (used to fill /valid and /is-valid).
A processing step is defined by the endpoint (/splits, /first-rows), where the result of the processing step can be downloaded. The endpoint value is also used as the cache key and the job type.
After this change, adding a new processing step should consist of:
- creating a new worker in the `workers/` directory
- updating the processing graph
- updating the CI, tests, docs and deployment (docker-compose files, e2e tests, docs, openapi, helm chart)
This also means that the services (API, admin) don't contain any code directly mentioning splits or first-rows, and the splits worker does not contain a direct reference to first-rows.
### Other changes
- code: the libcache and libqueue libraries have been merged into libcommon
- the code to check if a dataset is supported (exists, is not private, access can be programmatically obtained if gated) has been factorized and is now used before every processing step and before even accepting to create a new job (through the webhook or through the /admin/force-refresh endpoint).
- add a new endpoint: /admin/cancel-jobs, which replaces the last admin scripts. It's easier to send a POST request than to call a remote script.
- simplify the code of the workers by factorizing some code into libcommon:
- the code to test if a job should be skipped, based on the versions of the git repository and the worker
- the logic to catch errors and to write to the cache
This way, the code for every worker now only contains what is specific to that worker.
### Breaking changes
- env vars `QUEUE_MAX_LOAD_PCT`, `QUEUE_MAX_MEMORY_PCT` and `QUEUE_SLEEP_SECONDS` are renamed as `WORKER_MAX_LOAD_PCT`, `WORKER_MAX_MEMORY_PCT` and `WORKER_SLEEP_SECONDS`. | closed | 2022-11-28T15:08:38Z | 2022-11-28T21:47:21Z | 2022-11-28T21:47:20Z | severo |
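To make the dependency resolution concrete, here is a minimal, hypothetical sketch (not the actual libcommon code) of how a graph in the format above could be walked so that each step runs after the step it requires; the helper name is invented:
```python
# Illustrative only: the graph mirrors the JSON above, the traversal helper is an assumption.
PROCESSING_GRAPH = {
    "/splits": {"input_type": "dataset", "required_by_dataset_viewer": True},
    "/first-rows": {"input_type": "split", "requires": "/splits", "required_by_dataset_viewer": True},
}

def steps_in_order(graph: dict) -> list[str]:
    """Return the steps so that every step comes after the step it requires."""
    ordered: list[str] = []

    def visit(step: str) -> None:
        required = graph[step].get("requires")
        if required is not None and required not in ordered:
            visit(required)
        if step not in ordered:
            ordered.append(step)

    for step in graph:
        visit(step)
    return ordered

print(steps_in_order(PROCESSING_GRAPH))  # ['/splits', '/first-rows']
```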
1,466,362,303 | Small tweaks on Helm charts | - Ignore env files in packaged charts
- Add ability to completely specify `ingress.tls`
- Automatic value for `token.secretName` if not specified
- Do not submit `mongodb-migration` job if image not specified
- Add ability to specify `imagePullPolicy` | Small tweaks on Helm charts: - Ignore env files in packaged charts
- Add ability to completely specify `ingress.tls`
- Automatic value for `token.secretName` if not specified
- Do not submit `mongodb-migration` job if image not specified
- Add ability to specify `imagePullPolicy` | closed | 2022-11-28T12:43:30Z | 2023-01-02T11:30:31Z | 2023-01-02T11:30:30Z | n1t0 |
1,463,723,325 | Show which files in the datasets are scanned as unsafe | Hello, the HF dataset page shows that at least one file in [our dataset](https://huggingface.co/datasets/poloclub/diffusiondb) is scanned as unsafe. There are more than 16,000 files in the repo, so it's quite difficult to find which file is scanned as unsafe. It would be great if the warning message indicated which file is scanned as unsafe.
<img width="1595" alt="image" src="https://user-images.githubusercontent.com/15007159/203851476-8fbf49ce-988c-4362-98cf-e0ef34d8e4f0.png">
| Show which files in the datasets are scanned as unsafe: Hello, the HF dataset page shows that at least one file in [our dataset](https://huggingface.co/datasets/poloclub/diffusiondb) is scanned as unsafe. There are more than 16,000 files in the repo, so it's quite difficult to find which file is scanned as unsafe. It would be great if the warning message indicated which file is scanned as unsafe.
<img width="1595" alt="image" src="https://user-images.githubusercontent.com/15007159/203851476-8fbf49ce-988c-4362-98cf-e0ef34d8e4f0.png">
| closed | 2022-11-24T19:07:20Z | 2022-12-01T19:37:05Z | 2022-12-01T08:40:28Z | xiaohk |
1,463,277,821 | fix: 🐛 install missing dependency | null | fix: 🐛 install missing dependency: | closed | 2022-11-24T12:33:44Z | 2022-11-24T12:48:10Z | 2022-11-24T12:48:09Z | severo |
1,463,215,926 | feat: 🎸 upgrade to datasets 2.7.1 | see
https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/discussions/1 | feat: 🎸 upgrade to datasets 2.7.1: see
https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/discussions/1 | closed | 2022-11-24T11:42:18Z | 2022-11-24T12:06:22Z | 2022-11-24T12:06:21Z | severo |
1,462,506,117 | Replace safety with pip audit | https://github.com/pyupio/safety is updated only once per month (you have to pay to have more frequent updates). https://github.com/pypa/pip-audit has fewer stars (684 against 1.4k) but uses open data and is maintained by https://github.com/pypa.
Note that we upgrade poetry to 1.2.2 in this PR.
We have some issues with pip-audit (see below), and the fixes are a bit hacky (editing the requirements.txt file with `sed`). Ideally, it would be managed in a proper poetry plugin (see https://github.com/opeco17/poetry-audit-plugin/ for a plugin based on safety), but I think it's not worth creating a new repo for now.
---
Some notes:
- pip-audit suggests that poetry add a `poetry audit` command: https://github.com/pypa/pip-audit/issues/84
- poetry suggests that people who want to audit their dependencies use `pip-audit` or create a poetry plugin to do so: https://github.com/python-poetry/poetry/issues/6220
- a poetry plugin called https://github.com/opeco17/poetry-audit-plugin is based on safety, not pip-audit
That's why we do:
```bash
bash -c 'poetry run pip-audit -r <(poetry export -f requirements.txt --with dev)'
```
We still have an issue, though: the requirements.txt file contains duplicates when the same package is required both with and without "extras", e.g. with requests (which is not considered a bug by poetry: https://github.com/python-poetry/poetry-plugin-export/issues/129, https://github.com/python-poetry/poetry-plugin-export/issues/157, reason: https://github.com/python-poetry/poetry/pull/5688#issuecomment-1137845130):
```
requests==2.28.1 ; python_full_version == "3.9.6" \
--hash=sha256:7c5599b102feddaa661c826c56ab4fee28bfd17f5abca1ebbe3e7f19d7c97983 \
--hash=sha256:8fefa2a1a1365bf5520aac41836fbee479da67864514bdb821f31ce07ce65349
requests[socks]==2.28.1 ; python_full_version == "3.9.6" \
--hash=sha256:7c5599b102feddaa661c826c56ab4fee28bfd17f5abca1ebbe3e7f19d7c97983 \
--hash=sha256:8fefa2a1a1365bf5520aac41836fbee479da67864514bdb821f31ce07ce65349
```
but pip-audit fails in this case:
```
ERROR:pip_audit._cli:package requests has duplicate requirements: requests[socks]==2.28.1 (from RequirementLine(line_number=1992, line='requests[socks]==2.28.1 ; python_full_version == "3.9.6" --hash=sha256:7c5599b102feddaa661c826c56ab4fee28bfd17f5abca1ebbe3e7f19d7c97983 --hash=sha256:8fefa2a1a1365bf5520aac41836fbee479da67864514bdb821f31ce07ce65349', filename=PosixPath('/dev/fd/63')))
```
I added a comment here: https://github.com/pypa/pip-audit/issues/84#issuecomment-1326203111 | Replace safety with pip audit: https://github.com/pyupio/safety is updated only once per month (you have to pay to have more frequent updates). https://github.com/pypa/pip-audit has fewer stars (684 against 1.4k) but uses open data and is maintained by https://github.com/pypa.
Note that we upgrade poetry to 1.2.2 in this PR.
We have some issues with pip-audit (see below), and the fixes are a bit hacky (editing the requirements.txt file with `sed`). Ideally, it would be managed in a proper poetry plugin (see https://github.com/opeco17/poetry-audit-plugin/ for a plugin based on safety), but I think it's not worth creating a new repo for now.
---
Some notes:
- pip-audit suggests that poetry add a `poetry audit` command: https://github.com/pypa/pip-audit/issues/84
- poetry suggests that people who want to audit their dependencies use `pip-audit` or create a poetry plugin to do so: https://github.com/python-poetry/poetry/issues/6220
- a poetry plugin called https://github.com/opeco17/poetry-audit-plugin is based on safety, not pip-audit
That's why we do:
```bash
bash -c 'poetry run pip-audit -r <(poetry export -f requirements.txt --with dev)'
```
We still have an issue, though: the requirements.txt file contains duplicates when the same package is required both with and without "extras", e.g. with requests (which is not considered a bug by poetry: https://github.com/python-poetry/poetry-plugin-export/issues/129, https://github.com/python-poetry/poetry-plugin-export/issues/157, reason: https://github.com/python-poetry/poetry/pull/5688#issuecomment-1137845130):
```
requests==2.28.1 ; python_full_version == "3.9.6" \
--hash=sha256:7c5599b102feddaa661c826c56ab4fee28bfd17f5abca1ebbe3e7f19d7c97983 \
--hash=sha256:8fefa2a1a1365bf5520aac41836fbee479da67864514bdb821f31ce07ce65349
requests[socks]==2.28.1 ; python_full_version == "3.9.6" \
--hash=sha256:7c5599b102feddaa661c826c56ab4fee28bfd17f5abca1ebbe3e7f19d7c97983 \
--hash=sha256:8fefa2a1a1365bf5520aac41836fbee479da67864514bdb821f31ce07ce65349
```
but pip-audit fails in this case:
```
ERROR:pip_audit._cli:package requests has duplicate requirements: requests[socks]==2.28.1 (from RequirementLine(line_number=1992, line='requests[socks]==2.28.1 ; python_full_version == "3.9.6" --hash=sha256:7c5599b102feddaa661c826c56ab4fee28bfd17f5abca1ebbe3e7f19d7c97983 --hash=sha256:8fefa2a1a1365bf5520aac41836fbee479da67864514bdb821f31ce07ce65349', filename=PosixPath('/dev/fd/63')))
```
I added a comment here: https://github.com/pypa/pip-audit/issues/84#issuecomment-1326203111 | closed | 2022-11-23T22:51:44Z | 2022-11-24T11:24:05Z | 2022-11-24T11:24:04Z | severo |
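For illustration only (the PR itself patches the exported requirements.txt with `sed`), a small filter in the spirit of that workaround could drop the duplicated `requests[socks]` entry, hash continuation lines included, before handing the file to pip-audit; every name here is an assumption, not the project's actual script:
```python
# Hypothetical sketch of the dedup idea; the real workaround described above uses sed.
import re
import sys

def dedupe_requirements(lines):
    seen = set()
    skipping = False
    for line in lines:
        if skipping:
            # still inside the hash continuation lines of the entry we dropped
            skipping = line.rstrip().endswith("\\")
            continue
        match = re.match(r"^([A-Za-z0-9._-]+)(\[[^\]]*\])?==", line)
        if match:
            name = match.group(1).lower()
            if name in seen:
                # e.g. requests[socks]==... after requests==...: drop it and its hashes
                skipping = line.rstrip().endswith("\\")
                continue
            seen.add(name)
        yield line

if __name__ == "__main__":
    sys.stdout.writelines(dedupe_requirements(sys.stdin))
```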
1,458,113,614 | feat: 🎸 upgrade datasets | null | feat: 🎸 upgrade datasets: | closed | 2022-11-21T15:15:37Z | 2022-11-21T16:01:53Z | 2022-11-21T16:01:52Z | severo |
1,451,814,375 | feat: 🎸 upgrade huggingface_hub to 0.11.0 | null | feat: 🎸 upgrade huggingface_hub to 0.11.0: | closed | 2022-11-16T15:31:13Z | 2022-11-16T16:15:05Z | 2022-11-16T16:15:04Z | severo |
1,451,519,771 | Force job | - it automatically forces the refresh of a /first-rows entry if it's missing but should exist
- it adds two endpoints (behind authentication) to force the refresh of cache entries:
- `POST /admin/force-refresh/splits?dataset={dataset}`
- `POST /admin/force-refresh/first-rows?dataset={dataset}&config={config}&split={split}`
See #640 | Force job: - it automatically forces the refresh of a /first-rows entry if it's missing but should exist
- it adds two endpoints (behind authentication) to force the refresh of cache entries:
- `POST /admin/force-refresh/splits?dataset={dataset}`
- `POST /admin/force-refresh/first-rows?dataset={dataset}&config={config}&split={split}`
See #640 | closed | 2022-11-16T12:32:26Z | 2022-11-16T14:37:00Z | 2022-11-16T14:36:59Z | severo |
1,451,344,678 | Revert "Update pr docs actions" | Reverts huggingface/datasets-server#632
Everything is handled on the doc-builder side now π | Revert "Update pr docs actions": Reverts huggingface/datasets-server#632
Everything is handled on the doc-builder side now π | closed | 2022-11-16T10:44:48Z | 2022-11-16T10:48:41Z | 2022-11-16T10:44:53Z | mishig25 |
1,451,258,295 | Add a way to force refresh a cache entry | See, for example, https://huggingface.co/datasets/poloclub/diffusiondb/discussions/2#63740c982b908db63370bcc0.
Some of the configs have an error (probably) due to rate limiting on the data hosting.
We need to be able to force refresh them individually. Note that triggering the webhook does not work: as the worker version and the git commit have not changed, the job is skipped. | Add a way to force refresh a cache entry: See, for example, https://huggingface.co/datasets/poloclub/diffusiondb/discussions/2#63740c982b908db63370bcc0.
Some of the configs have an error (probably) due to rate limiting on the data hosting.
We need to be able to force refresh them individually. Note that triggering the webhook does not work: as the worker version and the git commit have not changed, the job is skipped. | closed | 2022-11-16T09:52:31Z | 2022-11-16T15:09:30Z | 2022-11-16T15:09:29Z | severo |
1,451,236,157 | feat: 🎸 update dependencies to fix vulnerabilities | null | feat: 🎸 update dependencies to fix vulnerabilities: | closed | 2022-11-16T09:38:45Z | 2022-11-16T09:53:26Z | 2022-11-16T09:53:26Z | severo |
1,450,473,235 | fix: 🐛 fix the truncation | see https://github.com/huggingface/datasets-server/issues/637 | fix: 🐛 fix the truncation: see https://github.com/huggingface/datasets-server/issues/637 | closed | 2022-11-15T22:31:08Z | 2022-11-16T08:58:47Z | 2022-11-16T08:58:46Z | severo |
1,449,368,948 | Dataset Viewer issue for alexandrainst/danish-wit | ### Link
https://huggingface.co/datasets/alexandrainst/danish-wit
### Description
The dataset viewer is not working for dataset alexandrainst/danish-wit.
Error details:
```
Error code: ClientConnectionError
```
| Dataset Viewer issue for alexandrainst/danish-wit: ### Link
https://huggingface.co/datasets/alexandrainst/danish-wit
### Description
The dataset viewer is not working for dataset alexandrainst/danish-wit.
Error details:
```
Error code: ClientConnectionError
```
| closed | 2022-11-15T08:23:58Z | 2022-11-16T09:23:53Z | 2022-11-15T16:00:41Z | saattrupdan |
1,444,527,079 | Add migration job | null | Add migration job: | closed | 2022-11-10T20:35:55Z | 2022-11-15T14:11:17Z | 2022-11-15T14:11:15Z | severo |
1,443,427,449 | Standardize Helms Charts | - Extract secret to use the helm chart without already existing secrets
- Abstract storage to PV/PVC
- Some helm refactoring
cc: @n1t0 | Standardize Helms Charts: - Extract secret to use the helm chart without already existing secrets
- Abstract storage to PV/PVC
- Some helm refactoring
cc: @n1t0 | closed | 2022-11-10T07:58:15Z | 2022-11-17T16:53:22Z | 2022-11-17T16:53:21Z | XciD |
1,442,111,535 | Refactor common cache entry | null | Refactor common cache entry: | closed | 2022-11-09T13:32:18Z | 2022-11-18T12:57:07Z | 2022-11-18T12:12:30Z | severo |
1,440,497,737 | ci: 🎡 remove the token for codecov since the repo is public | null | ci: 🎡 remove the token for codecov since the repo is public: | closed | 2022-11-08T16:12:23Z | 2022-11-08T19:29:03Z | 2022-11-08T19:29:02Z | severo |
1,440,390,117 | Update pr docs actions | null | Update pr docs actions: | closed | 2022-11-08T15:12:16Z | 2022-11-08T15:50:38Z | 2022-11-08T15:50:38Z | mishig25 |
1,439,863,960 | Parquet worker | <strike>Blocked by https://github.com/huggingface/huggingface_hub/issues/1165 (create the `refs/convert/parquet` ref)</strike>.
<strike>We'll maybe wait for hfh 0.11 instead of depending on the main branch</strike>
hfh 0.11.0 has been released, and we upgraded it in datasets-server: https://github.com/huggingface/datasets-server/pull/643
Limitations:
- gated datasets with "extra fields" are not supported
Wait for https://github.com/huggingface/datasets-server/pull/650, then rebase only to add the parquet implementation. | Parquet worker: <strike>Blocked by https://github.com/huggingface/huggingface_hub/issues/1165 (create the `refs/convert/parquet` ref)</strike>.
<strike>We'll maybe wait for hfh 0.11 instead of depending on the main branch</strike>
hfh 0.11.0 has been released, and we upgraded it in datasets-server: https://github.com/huggingface/datasets-server/pull/643
Limitations:
- gated datasets with "extra fields" are not supported
Wait for https://github.com/huggingface/datasets-server/pull/650, then rebase only to add the parquet implementation. | closed | 2022-11-08T09:57:52Z | 2023-01-23T10:35:20Z | 2022-11-28T22:52:46Z | severo |
1,438,415,177 | Dataset Viewer issue for alkzar90/NIH-Chest-X-ray-dataset | ### Link
https://huggingface.co/datasets/alkzar90/NIH-Chest-X-ray-dataset/viewer/image-classification/train
### Description
See the discussion https://discuss.huggingface.co/t/large-image-dataset-feedback-and-advice-data-viewer-task-template-and-more
For the `train` split, the dataset viewer mentions "no rows". It's incorrect: the issue, in that case, is that the train split cannot be loaded, but no error is raised.
More specifically, the first-rows worker tries to load the rows using the streaming mode; then, if that does not work and if the split weighs less than 100MB, it tries to load them using the normal mode (downloading all the data). In this case, the split weighs more than 100MB, and the streaming mode fails, but does not raise an error for some reason:
```python
>>> ds = load_dataset(path="alkzar90/NIH-Chest-X-ray-dataset", name="image-classification", split="train", streaming=True)
>>> next(iter(ds)) # <- fails, but it's not what is used in the code
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
>>> import itertools
>>> list(itertools.islice(ds, 101)) # <- this is used in the code and does not raise
[]
```
Note that the normal mode works (but downloads GBs of data):
```python
>>> ds = load_dataset(path="alkzar90/NIH-Chest-X-ray-dataset", name="image-classification", split="train")
Downloading and preparing dataset nih-chest-x-ray-dataset/image-classification to /home/slesage/.cache/huggingface/datasets/alkzar90___nih-chest-x-ray-dataset/image-classification/1.0.0/e6f0b4e0a72a9fad5e268364fb11802e767c4565fb82a19d8283db4323ddf7b2...
Downloading data: 100%|████████████████████████████████████████| 3.99G/3.99G [01:46<00:00, 37.4MB/s]
Downloading data: 100%|████████████████████████████████████████| 4.02G/4.02G [01:25<00:00, 47.0MB/s]
Downloading data: 100%|████████████████████████████████████████| 4.02G/4.02G [01:21<00:00, 49.3MB/s]
Downloading data: 100%|████████████████████████████████████████| 4.11G/4.11G [01:31<00:00, 44.9MB/s]
Downloading data: 100%|████████████████████████████████████████| 4.18G/4.18G [01:22<00:00, 50.6MB/s]
Downloading data: 100%|████████████████████████████████████████| 4.19G/4.19G [01:25<00:00, 48.8MB/s]
Downloading data: 100%|████████████████████████████████████████| 2.91G/2.91G [00:52<00:00, 55.7MB/s]
Downloading data files: 100%|████████████████████████████████████████| 12/12 [09:58<00:00, 49.84s/it]
Extracting data files: 100%|████████████████████████████████████████| 12/12 [05:42<00:00, 28.58s/it]
Extracting data files: 100%|████████████████████Dataset nih-chest-x-ray-dataset downloaded and prepared to /home/slesage/.cache/huggingface/datasets/alkzar90___nih-chest-x-ray-dataset/image-classification/1.0.0/e6f0b4e0a72a9fad5e268364fb11802e767c4565fb82a19d8283db4323ddf7b2. Subsequent calls will reuse this data.
>>> ds
Dataset({
features: ['image', 'labels'],
num_rows: 86524
})
>>> ds[0]
{'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=1024x1024 at 0x7FBEC81F0CA0>, 'labels': [0]}
```
cc @alcazar90 @julien-c
Happy to get help from the @huggingface/datasets team to help fix the streaming issue | Dataset Viewer issue for alkzar90/NIH-Chest-X-ray-dataset: ### Link
https://huggingface.co/datasets/alkzar90/NIH-Chest-X-ray-dataset/viewer/image-classification/train
### Description
See the discussion https://discuss.huggingface.co/t/large-image-dataset-feedback-and-advice-data-viewer-task-template-and-more
For the `train` split, the dataset viewer mentions "no rows". It's incorrect: the issue, in that case, is that the train split cannot be loaded, but no error is raised.
More specifically, the first-rows worker tries to load the rows using the streaming mode; then, if that does not work and if the split weighs less than 100MB, it tries to load them using the normal mode (downloading all the data). In this case, the split weighs more than 100MB, and the streaming mode fails, but does not raise an error for some reason:
```python
>>> ds = load_dataset(path="alkzar90/NIH-Chest-X-ray-dataset", name="image-classification", split="train", streaming=True)
>>> next(iter(ds)) # <- fails, but it's not what is used in the code
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
>>> import itertools
>>> list(itertools.islice(ds, 101)) # <- this is used in the code and does not raise
[]
```
Note that the normal mode works (but downloads GBs of data):
```python
>>> ds = load_dataset(path="alkzar90/NIH-Chest-X-ray-dataset", name="image-classification", split="train")
Downloading and preparing dataset nih-chest-x-ray-dataset/image-classification to /home/slesage/.cache/huggingface/datasets/alkzar90___nih-chest-x-ray-dataset/image-classification/1.0.0/e6f0b4e0a72a9fad5e268364fb11802e767c4565fb82a19d8283db4323ddf7b2...
Downloading data: 100%|████████████████████████████████████████| 3.99G/3.99G [01:46<00:00, 37.4MB/s]
Downloading data: 100%|████████████████████████████████████████| 4.02G/4.02G [01:25<00:00, 47.0MB/s]
Downloading data: 100%|████████████████████████████████████████| 4.02G/4.02G [01:21<00:00, 49.3MB/s]
Downloading data: 100%|████████████████████████████████████████| 4.11G/4.11G [01:31<00:00, 44.9MB/s]
Downloading data: 100%|████████████████████████████████████████| 4.18G/4.18G [01:22<00:00, 50.6MB/s]
Downloading data: 100%|████████████████████████████████████████| 4.19G/4.19G [01:25<00:00, 48.8MB/s]
Downloading data: 100%|████████████████████████████████████████| 2.91G/2.91G [00:52<00:00, 55.7MB/s]
Downloading data files: 100%|████████████████████████████████████████| 12/12 [09:58<00:00, 49.84s/it]
Extracting data files: 100%|████████████████████████████████████████| 12/12 [05:42<00:00, 28.58s/it]
Extracting data files: 100%|████████████████████Dataset nih-chest-x-ray-dataset downloaded and prepared to /home/slesage/.cache/huggingface/datasets/alkzar90___nih-chest-x-ray-dataset/image-classification/1.0.0/e6f0b4e0a72a9fad5e268364fb11802e767c4565fb82a19d8283db4323ddf7b2. Subsequent calls will reuse this data.
>>> ds
Dataset({
features: ['image', 'labels'],
num_rows: 86524
})
>>> ds[0]
{'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=1024x1024 at 0x7FBEC81F0CA0>, 'labels': [0]}
```
cc @alcazar90 @julien-c
Happy to get help from the @huggingface/datasets team to help fix the streaming issue | closed | 2022-11-07T14:01:00Z | 2022-11-07T14:17:13Z | 2022-11-07T14:16:50Z | severo |
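The "no rows" outcome described above can be reproduced with plain Python, independently of the dataset: `next()` on an exhausted iterator raises `StopIteration`, while `itertools.islice` silently returns nothing, so a broken streaming dataset can look like an empty split. A generic illustration:
```python
import itertools

empty_stream = iter([])  # stands in for a streaming dataset that yields nothing
print(list(itertools.islice(empty_stream, 101)))  # [] -> interpreted as "no rows", no exception

try:
    next(iter([]))
except StopIteration:
    print("next() raises StopIteration, which would at least surface an error")
```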
1,438,119,754 | Index the (text) datasets contents to enable full-text search | See an example by @ola13 on the ROOTS corpus: https://huggingface.co/spaces/bigscience-data/roots-search.
Implementation details are here: https://huggingface.co/spaces/bigscience-data/scisearch/resolve/main/roots_search_tool_specs.pdf
Internal discussion: https://huggingface.slack.com/archives/C0311GZ7R6K/p1667586339997379 | Index the (text) datasets contents to enable full-text search: See an example by @ola13 on the ROOTS corpus: https://huggingface.co/spaces/bigscience-data/roots-search.
Implementation details are here: https://huggingface.co/spaces/bigscience-data/scisearch/resolve/main/roots_search_tool_specs.pdf
Internal discussion: https://huggingface.slack.com/archives/C0311GZ7R6K/p1667586339997379 | closed | 2022-11-07T10:25:16Z | 2023-08-11T12:21:04Z | 2023-08-11T12:21:04Z | severo |
1,434,926,581 | Handle non-decoded image features | ### Link
https://huggingface.co/datasets/society-ethics/LILA
### Description
The dataset viewer expects images to be PIL images, but since the HF Datasets library allows non-decoded images (with `decode=False`) this is not always the case. At the moment, if an image feature is not decoded, the viewer fails with the following error in the UI:
```
Error code: RowsPostProcessingError
```
In the backend, the error is:

| Handle non-decoded image features: ### Link
https://huggingface.co/datasets/society-ethics/LILA
### Description
The dataset viewer expects images to be PIL images, but since the HF Datasets library allows non-decoded images (with `decode=False`) this is not always the case. At the moment, if an image feature is not decoded, the viewer fails with the following error in the UI:
```
Error code: RowsPostProcessingError
```
In the backend, the error is:

| closed | 2022-11-03T16:14:04Z | 2022-12-12T15:04:10Z | 2022-12-12T15:04:10Z | NimaBoscarino |
1,425,595,221 | feat: 🎸 change mongo indexes (following cloud recommendations) | also: try to rename the jobs collections, since it's ignored for now | feat: 🎸 change mongo indexes (following cloud recommendations): also: try to rename the jobs collections, since it's ignored for now | closed | 2022-10-27T12:57:31Z | 2022-10-27T13:27:22Z | 2022-10-27T13:27:21Z | severo |
1,425,373,710 | Limit the started jobs per "dataset namespace" | The env var `MAX_JOBS_PER_DATASET` is renamed `MAX_JOBS_PER_NAMESPACE`.
Also: select the next job among the namespaces with the least number of started jobs in order to avoid having all the workers dedicated to the same user if others are waiting. The namespace is the user, the org, or the dataset name for canonical datasets.
**Deployment:** the collection is now called jobs_blue (see blue/green deployment). Once deployed, look at the old "jobs" collection, relaunch the jobs for the waiting or started datasets, then delete the "jobs" collection. | Limit the started jobs per "dataset namespace": The env var `MAX_JOBS_PER_DATASET` is renamed `MAX_JOBS_PER_NAMESPACE`.
Also: select the next job among the namespaces with the least number of started jobs in order to avoid having all the workers dedicated to the same user if others are waiting. The namespace is the user, the org, or the dataset name for canonical datasets.
**Deployment:** the collection is now called jobs_blue (see blue/green deployment). Once deployed, look at the old "jobs" collection, relaunch the jobs for the waiting or started datasets, then delete the "jobs" collection. | closed | 2022-10-27T10:03:55Z | 2022-10-27T12:28:44Z | 2022-10-27T12:28:43Z | severo |
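A toy sketch (not the actual Mongo query) of the selection rule described above: among the namespaces that still have waiting jobs, pick one with the fewest started jobs, so a single user cannot monopolize the workers. Function and variable names are invented for illustration:
```python
from collections import Counter

def pick_namespace(waiting_namespaces: list[str], started_namespaces: list[str]) -> str:
    """Return a namespace with a waiting job and the least number of started jobs."""
    started_counts = Counter(started_namespaces)
    return min(set(waiting_namespaces), key=lambda ns: started_counts[ns])

# "userB" has fewer started jobs than "userA", so its waiting job is picked first
print(pick_namespace(["userA", "userB"], ["userA", "userA", "userB"]))  # userB
```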
1,424,116,276 | feat: 🎸 only sleep for 5 seconds | null | feat: 🎸 only sleep for 5 seconds: | closed | 2022-10-26T14:16:54Z | 2022-10-26T14:17:21Z | 2022-10-26T14:17:20Z | severo |
1,423,013,696 | Store and compare worker+dataset repo versions | to implement #545
- [x] make the worker aware of its own version
- [x] store the worker version and the datasets commit hash in the cache
- [x] perform the three checks before launching (or not) the update of the cache | Store and compare worker+dataset repo versions: to implement #545
- [x] make the worker aware of its own version
- [x] store the worker version and the datasets commit hash in the cache
- [x] perform the three checks before launching (or not) the update of the cache | closed | 2022-10-25T20:12:43Z | 2022-10-26T14:08:50Z | 2022-10-26T14:08:49Z | severo |
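A minimal sketch of the kind of check the list above describes, with invented names: the job can be skipped when the cache already holds an entry produced by the same worker version for the same git revision of the dataset repository.
```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CacheEntry:
    worker_version: str
    dataset_git_revision: str

def should_skip_job(
    cached: Optional[CacheEntry],
    worker_version: str,
    dataset_git_revision: str,
) -> bool:
    # 1. a cached entry exists, 2. same worker version, 3. same dataset revision
    if cached is None:
        return False
    return (
        cached.worker_version == worker_version
        and cached.dataset_git_revision == dataset_git_revision
    )
```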
1,422,887,147 | feat: 🎸 sort the configs alphabetically | see #614 | feat: 🎸 sort the configs alphabetically: see #614 | closed | 2022-10-25T18:19:18Z | 2022-10-25T19:11:49Z | 2022-10-25T19:11:48Z | severo |
1,422,692,351 | fix: 🐛 fix hf-token | the token was not extracted and passed to the applications | fix: 🐛 fix hf-token: the token was not extracted and passed to the applications | closed | 2022-10-25T15:45:50Z | 2022-10-25T15:52:24Z | 2022-10-25T15:52:13Z | severo |
1,422,410,439 | test: 💍 missing change in e2e | null | test: 💍 missing change in e2e: | closed | 2022-10-25T12:48:14Z | 2022-10-25T14:24:17Z | 2022-10-25T14:24:16Z | severo |
1,422,142,813 | Fix api metrics | null | Fix api metrics: | closed | 2022-10-25T09:20:30Z | 2022-10-25T09:38:44Z | 2022-10-25T09:38:43Z | severo |
1,422,085,901 | fix: 🐛 mount the assets directory | null | fix: 🐛 mount the assets directory: | closed | 2022-10-25T08:38:33Z | 2022-10-25T08:39:04Z | 2022-10-25T08:39:03Z | severo |
1,420,993,807 | Fix metrics | null | Fix metrics: | closed | 2022-10-24T15:11:10Z | 2022-10-24T19:05:38Z | 2022-10-24T19:05:37Z | severo |
1,420,709,389 | /admin/metrics seems to be broken | <img width="897" alt="Capture d'écran 2022-10-24 à 13 53 08" src="https://user-images.githubusercontent.com/1676121/197519588-65e8d69f-41b9-4384-b40e-e208f8d6aa27.png">
| /admin/metrics seems to be broken: <img width="897" alt="Capture d'écran 2022-10-24 à 13 53 08" src="https://user-images.githubusercontent.com/1676121/197519588-65e8d69f-41b9-4384-b40e-e208f8d6aa27.png">
| closed | 2022-10-24T11:53:59Z | 2022-10-25T09:40:56Z | 2022-10-25T09:40:56Z | severo |
1,418,431,335 | Details | null | Details: | closed | 2022-10-21T14:31:02Z | 2022-10-21T15:40:14Z | 2022-10-21T15:40:13Z | severo |
1,418,325,910 | refactor: 💡 setup everything in the configs | the connection to the databases, the logging configuration and the creation of the assets directory are done automatically just after getting the configuration, removing the need to do it when starting an app. Also: simplify the use of logging, by just calling the logging.xxx() functions directly, instead of using a custom logger. | refactor: 💡 setup everything in the configs: the connection to the databases, the logging configuration and the creation of the assets directory are done automatically just after getting the configuration, removing the need to do it when starting an app. Also: simplify the use of logging, by just calling the logging.xxx() functions directly, instead of using a custom logger. | closed | 2022-10-21T13:14:32Z | 2022-10-21T13:55:26Z | 2022-10-21T13:55:25Z | severo |
1,418,245,015 | [feat req] Alphabetical ordering for splits in dataset viewer | ### Link
https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0
### Description
Currently, the datasets splits for the viewer are displayed in a seemingly random order, see example for [Common Voice 11](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0):
<img width="1505" alt="Screenshot 2022-10-21 at 14 04 39" src="https://user-images.githubusercontent.com/93869735/197192381-46ca4041-db69-423e-be55-abf96e70167a.png">
It would be easier to traverse the list of possible splits if they were arranged alphabetically!
| [feat req] Alphabetical ordering for splits in dataset viewer: ### Link
https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0
### Description
Currently, the datasets splits for the viewer are displayed in a seemingly random order, see example for [Common Voice 11](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0):
<img width="1505" alt="Screenshot 2022-10-21 at 14 04 39" src="https://user-images.githubusercontent.com/93869735/197192381-46ca4041-db69-423e-be55-abf96e70167a.png">
It would be easier to traverse the list of possible splits if they were arranged alphabetically!
| closed | 2022-10-21T12:11:00Z | 2022-10-26T09:48:29Z | 2022-10-25T19:22:17Z | sanchit-gandhi |
1,418,069,140 | feat: 🎸 change the number of pods | null | feat: 🎸 change the number of pods: | closed | 2022-10-21T09:49:33Z | 2022-10-21T09:49:56Z | 2022-10-21T09:49:55Z | severo |
1,416,948,692 | Manage the environment variables and configuration more robustly | null | Manage the environment variables and configuration more robustly : | closed | 2022-10-20T16:47:18Z | 2022-10-21T09:46:55Z | 2022-10-21T09:46:54Z | severo |
1,411,921,140 | feat: 🎸 remove obsolete DATASETS_REVISION | Now that the canonical datasets are loaded from the Hub, DATASETS_REVISION (or HF_SCRIPTS_VERSION in datasets) is useless. | feat: 🎸 remove obsolete DATASETS_REVISION: Now that the canonical datasets are loaded from the Hub, DATASETS_REVISION (or HF_SCRIPTS_VERSION in datasets) is useless. | closed | 2022-10-17T17:09:25Z | 2022-10-17T20:00:22Z | 2022-10-17T20:00:21Z | severo |
1,411,757,796 | feat: 🎸 fix vulnerabilities by upgrading tensorflow | null | feat: 🎸 fix vulnerabilities by upgrading tensorflow: | closed | 2022-10-17T15:11:24Z | 2022-10-17T15:43:45Z | 2022-10-17T15:43:44Z | severo |
1,408,992,222 | feat: 🎸 8 splits workers | null | feat: 🎸 8 splits workers: | closed | 2022-10-14T08:41:42Z | 2022-10-14T08:41:49Z | 2022-10-14T08:41:48Z | severo |
1,407,755,327 | feat: 🎸 make the queue agnostic to the types of jobs | Before, we had two collections: one for splits jobs and one for first-rows jobs. Now there is only one collection, named "jobs", with a field "type". Note that the job arguments are still restricted to dataset (required) and optionally config and split.
BREAKING CHANGE: 🧨 two collections are removed and a new one is created. The function names have changed too. | feat: 🎸 make the queue agnostic to the types of jobs: Before, we had two collections: one for splits jobs and one for first-rows jobs. Now there is only one collection, named "jobs", with a field "type". Note that the job arguments are still restricted to dataset (required) and optionally config and split.
BREAKING CHANGE: 🧨 two collections are removed and a new one is created. The function names have changed too. | closed | 2022-10-13T12:56:35Z | 2022-10-17T17:02:49Z | 2022-10-17T15:04:07Z | severo |
1,405,137,923 | feat: 🎸 upgrade hub webhook client to v2 | null | feat: 🎸 upgrade hub webhook client to v2: | closed | 2022-10-11T19:39:42Z | 2022-10-11T20:33:00Z | 2022-10-11T20:33:00Z | severo |
1,403,735,793 | test: 💍 add tests for missing fields and None value | also: use JSONL file for tests | test: 💍 add tests for missing fields and None value: also: use JSONL file for tests | closed | 2022-10-10T21:35:13Z | 2022-10-11T09:40:41Z | 2022-10-11T09:40:41Z | severo |
1,403,648,014 | fix: 🐛 fix tests for the Sequence cells | follow-up to #603 | fix: 🐛 fix tests for the Sequence cells: follow-up to #603 | closed | 2022-10-10T20:00:57Z | 2022-10-10T20:15:53Z | 2022-10-10T20:15:52Z | severo |
1,403,189,237 | chore: 🤖 upgrade safety | null | chore: 🤖 upgrade safety: | closed | 2022-10-10T13:34:09Z | 2022-10-10T13:53:01Z | 2022-10-10T13:53:00Z | severo |
1,403,036,275 | Support Sequence of dicts | See #602 | Support Sequence of dicts: See #602 | closed | 2022-10-10T11:40:47Z | 2022-10-10T13:30:09Z | 2022-10-10T13:30:08Z | severo |
1,402,844,454 | RowsPostProcessingError in the viewer of several datasets | We find a `RowsPostProcessingError` in the viewer of several datasets:
```
Server error while post-processing the split rows. Please report the issue.
Error code: RowsPostProcessingError
```
- https://huggingface.co/datasets/multi_woz_v22
- https://huggingface.co/datasets/qasper | RowsPostProcessingError in the viewer of several datasets: We find a `RowsPostProcessingError` in the viewer of several datasets:
```
Server error while post-processing the split rows. Please report the issue.
Error code: RowsPostProcessingError
```
- https://huggingface.co/datasets/multi_woz_v22
- https://huggingface.co/datasets/qasper | closed | 2022-10-10T09:12:02Z | 2022-10-11T05:50:22Z | 2022-10-10T22:05:15Z | albertvillanova |
1,397,439,628 | feat: 🎸 change the format of the image cells in /first-rows | Instead of returning a string with the URL, we now return an object with src (the URL) and height and width (the dimensions of the image in pixels).
BREAKING CHANGE: 🧨 the image cell format is now an object {src, height, width} | feat: 🎸 change the format of the image cells in /first-rows: Instead of returning a string with the URL, we now return an object with src (the URL) and height and width (the dimensions of the image in pixels).
BREAKING CHANGE: 🧨 the image cell format is now an object {src, height, width} | closed | 2022-10-05T08:35:34Z | 2022-10-05T13:09:06Z | 2022-10-05T13:09:05Z | severo |
1,392,507,817 | Support package imports in datasets scripts | It would be neat if the viewer would work with datasets scripts that have arbitrary package imports (basically, being able to parse the output of `datasets.utils.py_utils.get_imports(dataset_script)` and prepare the env) instead of being constrained to some predefined list of the supported packages. Are there any plans to support that? | Support package imports in datasets scripts: It would be neat if the viewer would work with datasets scripts that have arbitrary package imports (basically, being able to parse the output of `datasets.utils.py_utils.get_imports(dataset_script)` and prepare the env) instead of being constrained to some predefined list of the supported packages. Are there any plans to support that? | closed | 2022-09-30T14:13:03Z | 2024-02-02T16:57:13Z | 2024-02-02T16:57:13Z | mariosasko |
1,392,408,266 | feat: 🎸 add a query on the features of the datasets | null | feat: 🎸 add a query on the features of the datasets: | closed | 2022-09-30T13:00:49Z | 2022-09-30T13:56:38Z | 2022-09-30T13:56:38Z | severo |
1,390,963,754 | Add section for macos | null | Add section for macos: | closed | 2022-09-29T14:21:36Z | 2022-09-29T14:21:47Z | 2022-09-29T14:21:47Z | severo |
1,390,864,633 | docs: ✏️ add sections | null | docs: ✏️ add sections: | closed | 2022-09-29T13:20:47Z | 2022-10-07T09:27:49Z | 2022-10-07T09:27:48Z | severo |
1,390,431,486 | ci: push the images to Docker Hub in the public organization hf | null | ci: push the images to Docker Hub in the public organization hf: | closed | 2022-09-29T08:01:11Z | 2022-09-29T08:44:08Z | 2022-09-29T08:41:12Z | severo |
1,390,425,227 | fix: 🐛 fix the dependencies for macos m1/m2 (#593) | * fix: 🐛 fix the dependencies for macos m1/m2
TensorFlow does not support macOS M1/M2 architectures; thus, for these platforms, we opt for installing tensorflow-macos instead, which is maintained by Apple. Note that it supports macOS on Intel too.
* chore: 🤖 update safety to fix vulnerability with dparse
funny that the vulnerability scanner was the one that introduced a vulnerability | fix: 🐛 fix the dependencies for macos m1/m2 (#593): * fix: 🐛 fix the dependencies for macos m1/m2
TensorFlow does not support macOS M1/M2 architectures; thus, for these platforms, we opt for installing tensorflow-macos instead, which is maintained by Apple. Note that it supports macOS on Intel too.
* chore: 🤖 update safety to fix vulnerability with dparse
funny that the vulnerability scanner was the one that introduced a vulnerability | closed | 2022-09-29T07:56:05Z | 2022-09-29T07:59:02Z | 2022-09-29T07:58:58Z | severo |
1,389,571,595 | fix: 🐛 fix the dependencies for macos m1/m2 | TensorFlow does not support macOS M1/M2 architectures; thus, for these platforms, we opt for installing tensorflow-macos instead, which is maintained by Apple. Note that it supports macOS on Intel too. | fix: 🐛 fix the dependencies for macos m1/m2: TensorFlow does not support macOS M1/M2 architectures; thus, for these platforms, we opt for installing tensorflow-macos instead, which is maintained by Apple. Note that it supports macOS on Intel too. | closed | 2022-09-28T15:36:04Z | 2022-09-28T16:21:02Z | 2022-09-28T16:21:02Z | severo |
1,387,628,556 | 587 fix list of images or audio | null | 587 fix list of images or audio: | closed | 2022-09-27T11:52:44Z | 2022-09-27T14:02:19Z | 2022-09-27T14:02:18Z | severo |
1,387,348,482 | fix: 🐛 restore the check on the webhook payload | The upstream issue has been fixed on the Hub. It's important to restore the test, because it avoids collision between datasets and models (models don't have a prefix in the webhook v1). | fix: 🐛 restore the check on the webhook payload: The upstream issue has been fixed on the Hub. It's important to restore the test, because it avoids collision between datasets and models (models don't have a prefix in the webhook v1). | closed | 2022-09-27T08:32:05Z | 2022-09-27T08:52:49Z | 2022-09-27T08:52:49Z | severo |
1,386,511,229 | tensorflow-io-gcs-filesystem build issue on M1 | Wondering if anyone is building and developing this on an M1/M2 machine. At the moment I am running into the following issue: `tensorflow-io-gcs-filesystem` is not currently available on macOS with the M1. Normally this wouldn't be a problem, as you can install it using the following:
```bash
python setup.py -q bdist_wheel --project tensorflow_io_gcs_filesystem
python -m pip install --no-deps dist/<wheel-file-from-last-step>
```
The only issue is that poetry seems to first remove all the dependencies before attempting to install everything. Is there a workaround for this that you all have seen? | tensorflow-io-gcs-filesystem build issue on M1: Wondering if anyone is building and developing this on an M1/M2 machine. At the moment I am running into the following issue: `tensorflow-io-gcs-filesystem` is not currently available on macOS with the M1. Normally this wouldn't be a problem, as you can install it using the following:
```bash
python setup.py -q bdist_wheel --project tensorflow_io_gcs_filesystem
python -m pip install --no-deps dist/<wheel-file-from-last-step>
```
The only issue is that poetry first removes all the dependencies, it seems, before attempting to install everything. Is there a workaround for this that you all have seen? | closed | 2022-09-26T18:13:38Z | 2022-09-29T19:00:20Z | 2022-09-29T07:13:50Z | dtaivpp |
1,385,771,542 | Details | null | Details: | closed | 2022-09-26T09:52:49Z | 2022-09-26T10:19:40Z | 2022-09-26T10:19:39Z | severo |
1,384,774,099 | Dataset Viewer issue for will33am/Caltech101 | ### Link
https://huggingface.co/datasets/will33am/Caltech101
### Description
The dataset viewer is not working for dataset will33am/Caltech101.
Error details:
```
Error code: ClientConnectionError
```
| Dataset Viewer issue for will33am/Caltech101: ### Link
https://huggingface.co/datasets/will33am/Caltech101
### Description
The dataset viewer is not working for dataset will33am/Caltech101.
Error details:
```
Error code: ClientConnectionError
```
| closed | 2022-09-24T18:34:25Z | 2022-09-26T08:58:31Z | 2022-09-26T08:58:14Z | williamberrios |
1,384,690,215 | Dataset Viewer issue for nlphuji/winogavil | ### Link
https://huggingface.co/datasets/nlphuji/winogavil
### Description
The dataset viewer is not working for dataset nlphuji/winogavil.
Error details:
```
Error code: UnexpectedError
Type is not JSON serializable: PngImageFile
```
**TLDR**: Is there an option to ignore a specific dataset field *only* in the Dataset Viewer, and to keep it as part of the dataset otherwise?
My dataset includes a field named "candidates" which is a list of strings, where each string is a filename reference.
In the process script ([here](https://huggingface.co/datasets/nlphuji/winogavil/blob/main/winogavil.py#L116)) I transform each of the filenames to a png.
This code:
```python
winogavil = load_dataset("nlphuji/winogavil", use_auth_token=auth_token)["test"]
print(winogavil[0]['candidates'])
print(winogavil[0]['candidates_images'])
```
returns the following:
```python
['eagle', 'teepee', 'factory', 'metal', 'ash']
[<PIL.PngImagePlugin.PngImageFile image mode=RGB size=2384x2384 at 0x7F6ABC6AA2D0>, <PIL.PngImagePlugin.PngImageFile image mode=RGB size=1500x1500 at 0x7F6ABC6AA390>, <PIL.PngImagePlugin.PngImageFile image mode=RGB size=1400x1400 at 0x7F6ABC6AAA50>, <PIL.PngImagePlugin.PngImageFile image mode=RGB size=2400x2400 at 0x7F6ABC6AAA10>, <PIL.PngImagePlugin.PngImageFile image mode=RGB size=900x900 at 0x7F6ABC6AA4D0>]
```
This is the intended use for my dataset, and the only gap is that it does not work in the Dataset Viewer.
Ideally, it would show each image name as a string, and have an option to "click" it to show the image.
- Do you have a support for a list of images in the Dataset Viewer?
- If not, is there an option just to eliminate the "candidate_images" field ONLY in the viewer, and keep it as a part of the dataset (with load_dataset)?
[This colab](https://colab.research.google.com/drive/19qcPovniLj2PiLlP75oFgsK-uhTr6SSi?usp=sharing) shows a usage example for my dataset with huggingface, if that helps.
I managed to reach a solution by defining it as a list of strings rather than images:
```python
#"candidates_images": [img]
"candidates_images": [datasets.Value("string")]
```
It means that this field will hold the strings and the dataset user will need to read it into an image object.
Now `['candidates_images'][0]` is
`'/root/.cache/huggingface/datasets/downloads/extracted/0d79b7f800bd172835f30786bdc7e6b20178375a9b927086b7f1ce0af27d63d4/winogavil_images/eagle.jpg'`
And it's possible to read it with Pillow.Image.open
This solution is fine with me, but I am wondering if there is a better solution.
Thanks! | Dataset Viewer issue for nlphuji/winogavil: ### Link
https://huggingface.co/datasets/nlphuji/winogavil
### Description
The dataset viewer is not working for dataset nlphuji/winogavil.
Error details:
```
Error code: UnexpectedError
Type is not JSON serializable: PngImageFile
```
**TLDR**: Is there an option to ignore a specific dataset field *only* in the Dataset Viewer, and to keep it as part of the dataset otherwise?
My dataset includes a field named "candidates" which is a list of strings, where each string is a filename reference.
In the process script ([here](https://huggingface.co/datasets/nlphuji/winogavil/blob/main/winogavil.py#L116)) I transform each of the filenames to a png.
This code:
```python
winogavil = load_dataset("nlphuji/winogavil", use_auth_token=auth_token)["test"]
print(winogavil[0]['candidates'])
print(winogavil[0]['candidates_images'])
```
returns the following:
```python
['eagle', 'teepee', 'factory', 'metal', 'ash']
[<PIL.PngImagePlugin.PngImageFile image mode=RGB size=2384x2384 at 0x7F6ABC6AA2D0>, <PIL.PngImagePlugin.PngImageFile image mode=RGB size=1500x1500 at 0x7F6ABC6AA390>, <PIL.PngImagePlugin.PngImageFile image mode=RGB size=1400x1400 at 0x7F6ABC6AAA50>, <PIL.PngImagePlugin.PngImageFile image mode=RGB size=2400x2400 at 0x7F6ABC6AAA10>, <PIL.PngImagePlugin.PngImageFile image mode=RGB size=900x900 at 0x7F6ABC6AA4D0>]
```
This is the intended use for my dataset, and the only gap is that it does not work in the Dataset Viewer.
Ideally, it would show each image name as a string, and have an option to "click" it to show the image.
- Do you have a support for a list of images in the Dataset Viewer?
- If not, is there an option just to eliminate the "candidate_images" field ONLY in the viewer, and keep it as a part of the dataset (with load_dataset)?
[This colab](https://colab.research.google.com/drive/19qcPovniLj2PiLlP75oFgsK-uhTr6SSi?usp=sharing) shows a usage example for my dataset with huggingface, if that helps.
I managed to reach a solution by defining it as a list of strings rather than images:
```python
#"candidates_images": [img]
"candidates_images": [datasets.Value("string")]
```
It means that this field will hold the strings and the dataset user will need to read it into an image object.
Now `['candidates_images'][0]` is
`'/root/.cache/huggingface/datasets/downloads/extracted/0d79b7f800bd172835f30786bdc7e6b20178375a9b927086b7f1ce0af27d63d4/winogavil_images/eagle.jpg'`
And it's possible to read it with Pillow.Image.open
This solution is fine with me, but I am wondering if there is a better solution.
Thanks! | closed | 2022-09-24T14:16:22Z | 2022-12-21T16:07:44Z | 2022-12-21T16:07:43Z | yonatanbitton |
1,383,974,629 | docs: βοΈ improve the onboarding | null | docs: βοΈ improve the onboarding: | closed | 2022-09-23T15:29:14Z | 2022-09-23T15:44:03Z | 2022-09-23T15:44:02Z | severo |
1,383,841,443 | Dataset Viewer issue for GEM/wiki_lingua | ### Link
https://huggingface.co/datasets/GEM/wiki_lingua
### Description
The dataset viewer is not working for dataset GEM/wiki_lingua.
Error details:
```
Error code: StreamingRowsError
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://huggingface.co/datasets/GEM/wiki_lingua/resolve/main/wikilingua_cleaned.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/responses/first_rows.py", line 337, in get_first_rows_response
rows = get_rows(dataset, config, split, streaming=True, rows_max_number=rows_max_number, hf_token=hf_token)
File "/src/services/worker/src/worker/utils.py", line 123, in decorator
return func(*args, **kwargs)
File "/src/services/worker/src/worker/responses/first_rows.py", line 65, in get_rows
ds = load_dataset(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1739, in load_dataset
return builder_instance.as_streaming_dataset(split=split)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1025, in as_streaming_dataset
splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}
File "/tmp/modules-cache/datasets_modules/datasets/GEM--wiki_lingua/84e1fa083237de0bf0016a1934d8b659ecafd567f398012ca5d702b7acc97450/wiki_lingua.py", line 184, in _split_generators
dl_dir = dl_manager.download_and_extract(_URL)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 944, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 907, in extract
urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 385, in map_nested
return function(data_struct)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 912, in _extract
protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 390, in _get_extraction_protocol
raise NotImplementedError(
NotImplementedError: Extraction protocol for TAR archives like 'https://huggingface.co/datasets/GEM/wiki_lingua/resolve/main/wikilingua_cleaned.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.
```
| Dataset Viewer issue for GEM/wiki_lingua: ### Link
https://huggingface.co/datasets/GEM/wiki_lingua
### Description
The dataset viewer is not working for dataset GEM/wiki_lingua.
Error details:
```
Error code: StreamingRowsError
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://huggingface.co/datasets/GEM/wiki_lingua/resolve/main/wikilingua_cleaned.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/responses/first_rows.py", line 337, in get_first_rows_response
rows = get_rows(dataset, config, split, streaming=True, rows_max_number=rows_max_number, hf_token=hf_token)
File "/src/services/worker/src/worker/utils.py", line 123, in decorator
return func(*args, **kwargs)
File "/src/services/worker/src/worker/responses/first_rows.py", line 65, in get_rows
ds = load_dataset(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1739, in load_dataset
return builder_instance.as_streaming_dataset(split=split)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1025, in as_streaming_dataset
splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}
File "/tmp/modules-cache/datasets_modules/datasets/GEM--wiki_lingua/84e1fa083237de0bf0016a1934d8b659ecafd567f398012ca5d702b7acc97450/wiki_lingua.py", line 184, in _split_generators
dl_dir = dl_manager.download_and_extract(_URL)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 944, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 907, in extract
urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 385, in map_nested
return function(data_struct)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 912, in _extract
protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 390, in _get_extraction_protocol
raise NotImplementedError(
NotImplementedError: Extraction protocol for TAR archives like 'https://huggingface.co/datasets/GEM/wiki_lingua/resolve/main/wikilingua_cleaned.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.
```
| closed | 2022-09-23T13:50:57Z | 2022-09-23T13:51:29Z | 2022-09-23T13:51:29Z | severo |
1,383,759,087 | Dataset Viewer issue for severo/empty_public | ### Link
https://huggingface.co/datasets/severo/empty_public
### Description
The dataset viewer is not working for dataset severo/empty_public.
Error details:
```
Error code: EmptyDatasetError
Exception: EmptyDatasetError
Message: The dataset repository at 'severo/empty_public' doesn't contain any data files
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/responses/splits.py", line 79, in get_splits_response
split_full_names = get_dataset_split_full_names(dataset, hf_token)
File "/src/services/worker/src/worker/responses/splits.py", line 41, in get_dataset_split_full_names
for config in get_dataset_config_names(dataset, use_auth_token=hf_token)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 308, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1171, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1156, in dataset_module_factory
return HubDatasetModuleFactoryWithoutScript(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 760, in get_module
else get_data_patterns_in_dataset_repository(hfh_dataset_info, self.data_dir)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/data_files.py", line 676, in get_data_patterns_in_dataset_repository
raise EmptyDatasetError(
datasets.data_files.EmptyDatasetError: The dataset repository at 'severo/empty_public' doesn't contain any data files
```
cc @albertvillanova @lhoestq @severo. | Dataset Viewer issue for severo/empty_public: ### Link
https://huggingface.co/datasets/severo/empty_public
### Description
The dataset viewer is not working for dataset severo/empty_public.
Error details:
```
Error code: EmptyDatasetError
Exception: EmptyDatasetError
Message: The dataset repository at 'severo/empty_public' doesn't contain any data files
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/responses/splits.py", line 79, in get_splits_response
split_full_names = get_dataset_split_full_names(dataset, hf_token)
File "/src/services/worker/src/worker/responses/splits.py", line 41, in get_dataset_split_full_names
for config in get_dataset_config_names(dataset, use_auth_token=hf_token)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 308, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1171, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1156, in dataset_module_factory
return HubDatasetModuleFactoryWithoutScript(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 760, in get_module
else get_data_patterns_in_dataset_repository(hfh_dataset_info, self.data_dir)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/data_files.py", line 676, in get_data_patterns_in_dataset_repository
raise EmptyDatasetError(
datasets.data_files.EmptyDatasetError: The dataset repository at 'severo/empty_public' doesn't contain any data files
```
cc @albertvillanova @lhoestq @severo. | closed | 2022-09-23T12:50:25Z | 2022-09-23T13:06:13Z | 2022-09-23T13:06:13Z | severo |
1,382,333,297 | Simplify code snippet in docs | We can use the method directly:
```python
response.json()
```
because the inferred encoding from the headers is the right one:
```python
In [6]: response.encoding
Out[6]: 'utf-8'
``` | Simplify code snippet in docs: We can use the method directly:
```python
response.json()
```
because the inferred encoding from the headers is the right one:
```python
In [6]: response.encoding
Out[6]: 'utf-8'
``` | closed | 2022-09-22T12:02:13Z | 2022-09-23T12:02:03Z | 2022-09-23T12:02:03Z | albertvillanova |
1,382,223,911 | Fix private to public | See https://github.com/huggingface/datasets-server/issues/380#issuecomment-1254740105:
> private -> public does not generate a webhook; it seems like a bug on the Hub. (btw: turning a public repo to private generates an "update" webhook, conversely)
and
https://github.com/huggingface/moon-landing/issues/2362#issuecomment-1254774183
> Meanwhile, in datasets-server, I will fix the issue by using the request (that generates the SplitsResponseNotFound error) to trigger a cache refresh if needed, so: not urgent
In this PR, if a response to /splits or /first-rows is NotFound but should have existed, i.e., it is a cache miss, we add a job to the queue, and return a NotReady error instead. | Fix private to public: See https://github.com/huggingface/datasets-server/issues/380#issuecomment-1254740105:
> private -> public does not generate a webhook; it seems like a bug on the Hub. (btw: turning a public repo to private generates an "update" webhook, conversely)
and
https://github.com/huggingface/moon-landing/issues/2362#issuecomment-1254774183
> Meanwhile, in datasets-server, I will fix the issue by using the request (that generates the SplitsResponseNotFound error) to trigger a cache refresh if needed, so: not urgent
In this PR, if a response to /splits or /first-rows is NotFound but should have existed, i.e., it is a cache miss, we add a job to the queue, and return a NotReady error instead. | closed | 2022-09-22T10:35:11Z | 2022-09-23T11:47:55Z | 2022-09-23T11:47:54Z | severo |
1,382,070,756 | Hot fix webhook v1 | null | Hot fix webhook v1: | closed | 2022-09-22T08:45:30Z | 2022-09-22T08:55:30Z | 2022-09-22T08:55:29Z | severo |
1,381,131,974 | feat: πΈ upgrade datasets to 2.5.1 | also remove the exceptions for safety | feat: πΈ upgrade datasets to 2.5.1: also remove the exceptions for safety | closed | 2022-09-21T15:34:12Z | 2022-09-21T16:36:23Z | 2022-09-21T16:36:22Z | severo |
1,380,798,968 | Use json logs in nginx | null | Use json logs in nginx: | closed | 2022-09-21T11:48:38Z | 2022-09-21T12:13:30Z | 2022-09-21T12:13:29Z | severo |
1,379,950,972 | Fix dependency vulnerabilities | null | Fix dependency vulnerabilities: | closed | 2022-09-20T20:22:19Z | 2022-09-20T20:49:34Z | 2022-09-20T20:49:33Z | severo |
1,379,445,320 | refactor: π‘ remove dead code and TODO comments | the info, if relevant, has been added to issues or internal notes. | refactor: π‘ remove dead code and TODO comments: the info, if relevant, has been added to issues or internal notes. | closed | 2022-09-20T13:41:07Z | 2022-09-20T13:56:57Z | 2022-09-20T13:56:56Z | severo |
1,379,384,561 | docs: βοΈ fix the docs to only use datasets server, not ds api | null | docs: βοΈ fix the docs to only use datasets server, not ds api: | closed | 2022-09-20T13:00:26Z | 2022-09-20T13:03:17Z | 2022-09-20T13:00:38Z | severo |
1,378,586,354 | refactor: π‘ remove unused value | null | refactor: π‘ remove unused value: | closed | 2022-09-19T22:15:50Z | 2022-09-19T22:20:36Z | 2022-09-19T22:20:35Z | severo |
1,378,548,833 | chore: π€ add an issue template | null | chore: π€ add an issue template: | closed | 2022-09-19T21:33:01Z | 2022-09-19T21:33:07Z | 2022-09-19T21:33:06Z | severo |
1,378,507,850 | feat: πΈ remove support for .env files | we support docker and helm/kubernetes: pass the environment variables, or use the default values. | feat: πΈ remove support for .env files: we support docker and helm/kubernetes: pass the environment variables, or use the default values. | closed | 2022-09-19T20:51:52Z | 2022-09-20T09:17:39Z | 2022-09-20T09:17:38Z | severo |
1,378,463,664 | chore: π€ add license and other files before going opensource | See https://spdx.dev/ids/ and https://developer.blender.org/T95597 | chore: π€ add license and other files before going opensource: See https://spdx.dev/ids/ and https://developer.blender.org/T95597 | closed | 2022-09-19T20:05:49Z | 2022-09-20T12:44:33Z | 2022-09-20T12:44:33Z | severo |
1,378,223,297 | docs: βοΈ update and simplify the README/INSTALL/CONTRIBUTING doc | null | docs: βοΈ update and simplify the README/INSTALL/CONTRIBUTING doc: | closed | 2022-09-19T16:33:15Z | 2022-09-20T09:42:01Z | 2022-09-19T16:43:05Z | severo |
1,376,212,695 | feat: πΈ don't close issues with tag "keep" | ¯\\\_(ツ)\_/¯ | feat: πΈ don't close issues with tag "keep": ¯\\\_(ツ)\_/¯ | closed | 2022-09-16T17:20:17Z | 2022-09-16T17:20:46Z | 2022-09-16T17:20:45Z | severo |
1,375,926,886 | Improve the docs | Add more sections in the docs of datasets-server:
> What about having several top-level sections e.g.
>
> - First "Quickstart" that shows the main usage of the API
> - Then "How to guides" to go further into details ("Check a dataset validity", "List configurations and splits" and "Preview the data")
> - Finally a "Conceptual Guide" to explain the relationship between configs and splits, the typing logic etc.
https://github.com/huggingface/datasets-server/pull/566#issuecomment-1249138252
Also, insert references to the datasets-server inside the datasets library docs (see the first try here: https://github.com/huggingface/datasets/pull/4984)
We could work on it together, @stevhliu? | Improve the docs: Add more sections in the docs of datasets-server:
> What about having several top-level sections e.g.
>
> - First "Quickstart" that shows the main usage of the API
> - Then "How to guides" to go further into details ("Check a dataset validity", "List configurations and splits" and "Preview the data")
> - Finally a "Conceptual Guide" to explain the relationship between configs and splits, the typing logic etc.
https://github.com/huggingface/datasets-server/pull/566#issuecomment-1249138252
Also, insert references to the datasets-server inside the datasets library docs (see the first try here: https://github.com/huggingface/datasets/pull/4984)
We could work on it together, @stevhliu? | closed | 2022-09-16T13:07:18Z | 2022-09-19T08:46:47Z | 2022-09-19T08:46:47Z | severo |
1,374,702,859 | Test the doc snippets | See https://github.com/huggingface/api-inference/blob/main/static/tests/documentation/test_inference.py for example | Test the doc snippets: See https://github.com/huggingface/api-inference/blob/main/static/tests/documentation/test_inference.py for example | closed | 2022-09-15T15:21:16Z | 2022-09-19T08:47:38Z | 2022-09-19T08:47:38Z | severo |
1,374,569,150 | rework doc | fixes #562 | rework doc: fixes #562 | closed | 2022-09-15T13:53:47Z | 2022-09-22T15:55:43Z | 2022-09-16T13:02:04Z | severo |
1,374,508,606 | chore: π€ add a stale bot | fixes #564 | chore: π€ add a stale bot: fixes #564 | closed | 2022-09-15T13:10:59Z | 2022-09-15T17:32:49Z | 2022-09-15T17:32:48Z | severo |