id | title | body | description | state | created_at | updated_at | closed_at | user |
---|---|---|---|---|---|---|---|---|
1,676,751,430 | Propagate previous step error | - ensure that all the job runners propagate the error from a previous step, if there is one. This will help the users of the dataset viewer: they will see the original error (e.g. the dataset is empty), instead of confusing errors about internal steps
Also, refactor: remove special cases by making PreviousJobError a CustomError, like the other exceptions | Propagate previous step error: - ensure that all the job runners propagate the error from a previous step, if there is one. This will help the users of the dataset viewer: they will see the original error (e.g. the dataset is empty), instead of confusing errors about internal steps
Also, refactor: remove special cases by making PreviousJobError a CustomError, like the other exceptions | closed | 2023-04-20T13:40:33Z | 2023-04-20T15:29:11Z | 2023-04-20T15:26:05Z | severo |
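A minimal sketch of the error-propagation idea described in the entry above. The class names beyond `PreviousJobError`/`CustomError` and the cache-entry fields are assumptions for illustration, not the actual datasets-server code:

```python
# Hypothetical sketch: surface the previous step's original error instead of a generic one.
from http import HTTPStatus
from typing import Any, Dict


class CustomError(Exception):
    def __init__(self, message: str, status_code: HTTPStatus, code: str) -> None:
        super().__init__(message)
        self.status_code = status_code
        self.code = code


class PreviousJobError(CustomError):
    """Raised when the previous step already failed; carries the original error."""


def get_previous_step_content(cache_entry: Dict[str, Any]) -> Dict[str, Any]:
    # If the previous step failed, re-raise its original error (e.g. "the dataset is empty")
    # so the user does not see a cryptic error about the current step.
    if cache_entry["http_status"] != HTTPStatus.OK:
        original = cache_entry["content"]
        raise PreviousJobError(
            message=original.get("error", "Previous step failed"),
            status_code=HTTPStatus(cache_entry["http_status"]),
            code=original.get("error_code", "PreviousStepError"),
        )
    return cache_entry["content"]
```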
1,676,506,799 | fix: 🐛 add missing "config" parameter | when fetching the previous step (config-info) | fix: 🐛 add missing "config" parameter: when fetching the previous step (config-info) | closed | 2023-04-20T11:06:34Z | 2023-04-24T12:31:10Z | 2023-04-20T11:30:42Z | severo |
1,675,245,630 | Adding ttls time for metrics cron job | null | Adding ttls time for metrics cron job : | closed | 2023-04-19T16:29:55Z | 2023-04-19T16:47:55Z | 2023-04-19T16:44:52Z | AndreaFrancis |
1,675,199,770 | Add missing cache config | null | Add missing cache config: | closed | 2023-04-19T15:59:15Z | 2023-04-19T16:10:23Z | 2023-04-19T16:07:10Z | AndreaFrancis |
1,675,006,823 | fix: cron job definition | null | fix: cron job definition: | closed | 2023-04-19T14:20:15Z | 2023-04-19T15:48:23Z | 2023-04-19T15:45:16Z | rtrompier |
1,674,900,633 | feat: 🎸 add backfill tasks, and button to launch backfill | See it in action. Here we see that a repo that does not exist shows an error (we could improve it), and then when the repo is created (but empty), the cache entries are created. Note that currently, the setup is a bit weird:
- the local hub does not send webhooks to the local datasets-server
- but: as the local hub has tried to get the list of first rows from the datasets-server, it has triggered an update of the dataset (see how service/api works when a cache entry is missing)
- thus: some cache entries have been created
- BUT: it was done using the "job runner" logic, where every step creates the following steps after the job has been finished. This logic has some flaws, which is why the backfill plan shows some missing tasks: some other jobs should have been created
https://user-images.githubusercontent.com/1676121/233086733-133866c0-bcaf-4f62-903b-9bb7a5c29f40.mov
Here we show that, once we add files to the empty repository, the backfill plan is to refresh a lot of cache entries. We execute the backfill, and then the dataset viewer works.
https://user-images.githubusercontent.com/1676121/233087075-d8aeeb11-817f-4bae-b412-ea792e32796a.mov
| feat: 🎸 add backfill tasks, and button to launch backfill: See it in action. Here we see that a repo that does not exist shows an error (we could improve it), and then when the repo is created (but empty), the cache entries are created. Note that currently, the setup is a bit weird:
- the local hub does not send webhooks to the local datasets-server
- but: as the local hub has tried to get the list of first rows from the datasets-server, it has triggered an update of the dataset (see how service/api works when a cache entry is missing)
- thus: some cache entries have been created
- BUT: it was done using the "job runner" logic, where every step creates the following steps after the job has been finished. This logic has some flaws, which is why the backfill plan shows some missing tasks: some other jobs should have been created
https://user-images.githubusercontent.com/1676121/233086733-133866c0-bcaf-4f62-903b-9bb7a5c29f40.mov
Here we show that, once we add files to the empty repository, the backfill plan is to refresh a lot of cache entries. We execute the backfill, and then the dataset viewer works.
https://user-images.githubusercontent.com/1676121/233087075-d8aeeb11-817f-4bae-b412-ea792e32796a.mov
| closed | 2023-04-19T13:25:11Z | 2023-04-19T13:47:10Z | 2023-04-19T13:44:17Z | severo |
1,674,710,695 | Handle fan-in steps | Now, the fan-in steps are refreshed when one of the parent artifacts is updated (i.e. if a config-parquet entry is computed, the dataset-parquet artifact is marked for refresh). Before, they were not updated in that case, which was a bug. | Handle fan-in steps: Now, the fan-in steps are refreshed when one of the parent artifacts is updated (i.e. if a config-parquet entry is computed, the dataset-parquet artifact is marked for refresh). Before, they were not updated in that case, which was a bug. | closed | 2023-04-19T11:27:14Z | 2023-04-19T13:47:34Z | 2023-04-19T13:43:51Z | severo |
1,673,550,428 | fix: 🐛 remove wrong filtering of the step parents | Even if a parent is an ancestor of another parent, we want to get it in the list. | fix: 🐛 remove wrong filtering of the step parents: Even if a parent is an ancestor of another parent, we want to get it in the list. | closed | 2023-04-18T17:24:11Z | 2023-04-19T13:33:07Z | 2023-04-19T13:29:30Z | severo |
1,673,337,446 | Change restart policy for metrics collector cron job | null | Change restart policy for metrics collector cron job: | closed | 2023-04-18T15:09:38Z | 2023-04-18T18:22:01Z | 2023-04-18T18:18:27Z | AndreaFrancis |
1,673,292,456 | Fix container for metrics collector | null | Fix container for metrics collector: | closed | 2023-04-18T14:47:37Z | 2023-04-18T15:02:45Z | 2023-04-18T14:54:04Z | AndreaFrancis |
1,673,119,141 | Fix dataset level aggregators | copy the error if the previous step is erroneous, instead of errors that cannot be understood by the user, like https://github.com/huggingface/datasets-server/issues/1055. That way, an error (like 'empty dataset' for example) is propagated to the descendants.
---
Before, on an empty dataset repository, the dataset-split-names, for example, returned:
<img width="671" alt="Capture d’écran 2023-04-18 à 16 47 01" src="https://user-images.githubusercontent.com/1676121/232814116-bbf9ae08-c4fa-4b3c-ae3d-1330a0d4ba5b.png">
Now:
<img width="1592" alt="Capture d’écran 2023-04-18 à 16 45 05" src="https://user-images.githubusercontent.com/1676121/232813719-67b8e8c4-89be-4d50-9607-02d1aab0843c.png">
| Fix dataset level aggregators: copy the error if the previous step is erroneous, instead of errors that cannot be understood by the user, like https://github.com/huggingface/datasets-server/issues/1055. That way, an error (like 'empty dataset' for example) is propagated to the descendants.
---
Before, on an empty dataset repository, the dataset-split-names, for example, returned:
<img width="671" alt="Capture d’écran 2023-04-18 à 16 47 01" src="https://user-images.githubusercontent.com/1676121/232814116-bbf9ae08-c4fa-4b3c-ae3d-1330a0d4ba5b.png">
Now:
<img width="1592" alt="Capture d’écran 2023-04-18 à 16 45 05" src="https://user-images.githubusercontent.com/1676121/232813719-67b8e8c4-89be-4d50-9607-02d1aab0843c.png">
| closed | 2023-04-18T13:17:06Z | 2023-04-19T07:31:26Z | 2023-04-19T07:28:29Z | severo |
1,673,043,556 | Dataset Viewer issue for togethercomputer/RedPajama-Data-1T | ### Link
https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T
### Description
The dataset viewer is not working for dataset togethercomputer/RedPajama-Data-1T.
Error details:
```
Error code: PreviousStepFormatError
```
| Dataset Viewer issue for togethercomputer/RedPajama-Data-1T: ### Link
https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T
### Description
The dataset viewer is not working for dataset togethercomputer/RedPajama-Data-1T.
Error details:
```
Error code: PreviousStepFormatError
```
| closed | 2023-04-18T12:39:34Z | 2023-05-10T05:25:36Z | 2023-05-10T05:25:36Z | carlosseda |
1,672,998,455 | Dataset Viewer issue for jiacheng-ye/logiqa-zh | ### Link
https://huggingface.co/datasets/jiacheng-ye/logiqa-zh
### Description
The dataset viewer is not working for dataset jiacheng-ye/logiqa-zh.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for jiacheng-ye/logiqa-zh: ### Link
https://huggingface.co/datasets/jiacheng-ye/logiqa-zh
### Description
The dataset viewer is not working for dataset jiacheng-ye/logiqa-zh.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-04-18T12:16:37Z | 2023-04-19T07:42:45Z | 2023-04-19T07:42:44Z | jiacheng-ye |
1,671,407,294 | Avoid disclosing private datasets | Change the error message when a dataset is private. Note that normally this error message was never shown in the API, but, it's better to reduce the risk of it being misused anyway
Also: return a specific error when the revision does not exist, but the dataset exists | Avoid disclosing private datasets: Change the error message when a dataset is private. Note that normally this error message was never shown in the API, but, it's better to reduce the risk of it being misused anyway
Also: return a specific error when the revision does not exist, but the dataset exists | closed | 2023-04-17T15:08:11Z | 2023-04-17T15:57:35Z | 2023-04-17T15:54:34Z | severo |
1,671,224,608 | chore: 🤖 add DEV_NETWORK_MODE and DEV_MONGO_HOST | On Ubuntu, set DEV_NETWORK_MODE=host and DEV_MONGO_HOST=localhost. On Mac, leave them unset.
| chore: 🤖 add DEV_NETWORK_MODE and DEV_MONGO_HOST: On Ubuntu, set DEV_NETWORK_MODE=host and DEV_MONGO_HOST=localhost. On Mac, leave them unset.
| closed | 2023-04-17T13:48:12Z | 2023-04-17T15:57:18Z | 2023-04-17T15:54:23Z | severo |
1,670,960,249 | Dataset Viewer issue for austint73/butterfly_dataset | ### Link
https://huggingface.co/datasets/austint73/butterfly_dataset
### Description
The dataset viewer is not working for dataset austint73/butterfly_dataset.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for austint73/butterfly_dataset: ### Link
https://huggingface.co/datasets/austint73/butterfly_dataset
### Description
The dataset viewer is not working for dataset austint73/butterfly_dataset.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-04-17T11:24:10Z | 2023-07-25T09:10:01Z | 2023-04-18T06:19:14Z | jS5t3r |
1,670,217,654 | Dataset Viewer issue for Uberg/UbergsAquaticPlantsDataset | ### Link
https://huggingface.co/datasets/Uberg/UbergsAquaticPlantsDataset
### Description
The dataset viewer is not working for dataset Uberg/UbergsAquaticPlantsDataset.
Error details:
```
Error code: ResponseAlreadyComputedError
```
| Dataset Viewer issue for Uberg/UbergsAquaticPlantsDataset: ### Link
https://huggingface.co/datasets/Uberg/UbergsAquaticPlantsDataset
### Description
The dataset viewer is not working for dataset Uberg/UbergsAquaticPlantsDataset.
Error details:
```
Error code: ResponseAlreadyComputedError
```
| closed | 2023-04-17T00:34:30Z | 2023-04-18T06:21:17Z | 2023-04-18T06:21:17Z | Casavantii |
1,669,789,785 | Dataset Viewer issue for banking77 | ### Link
https://huggingface.co/datasets/banking77
### Description
The dataset viewer is not working for dataset banking77.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for banking77: ### Link
https://huggingface.co/datasets/banking77
### Description
The dataset viewer is not working for dataset banking77.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-04-16T08:51:46Z | 2023-04-17T13:58:48Z | 2023-04-17T13:58:47Z | aman-tandon-30 |
1,664,902,158 | fix: 🐛 add missing environment variable | null | fix: 🐛 add missing environment variable: | closed | 2023-04-12T16:46:11Z | 2023-04-12T16:50:06Z | 2023-04-12T16:47:17Z | severo |
1,664,694,924 | Dataset Viewer issue for dominguesm/Canarim-Instruct-PTBR-Dataset | ### Link
https://huggingface.co/datasets/dominguesm/Canarim-Instruct-PTBR-Dataset
### Description
The dataset viewer is not working for dataset dominguesm/Canarim-Instruct-PTBR-Dataset.
Error details:
```
Error code: PreviousStepFormatError
```
| Dataset Viewer issue for dominguesm/Canarim-Instruct-PTBR-Dataset: ### Link
https://huggingface.co/datasets/dominguesm/Canarim-Instruct-PTBR-Dataset
### Description
The dataset viewer is not working for dataset dominguesm/Canarim-Instruct-PTBR-Dataset.
Error details:
```
Error code: PreviousStepFormatError
```
| closed | 2023-04-12T14:35:36Z | 2023-04-13T09:15:51Z | 2023-04-13T09:15:51Z | DominguesM |
1,663,342,079 | Separate computation metrics for jobs and cache in a cron job | This is a proposal for https://github.com/huggingface/datasets-server/issues/973
1. Store metrics in db
2. Get the latest record in the metric collection and send it to prometheus instead of calculating it again
Pending:
- [x] Unittest for new action collect-metrics
- [x] Fix e2e tests for admin metrics
- [x] Make cron schedule parameterized (It is hardcoded) | Separate computation metrics for jobs and cache in a cron job: This is a proposal for https://github.com/huggingface/datasets-server/issues/973
1. Store metrics in db
2. Get the latest record in the metric collection and send it to prometheus instead of calculating it again
Pending:
- [x] Unittest for new action collect-metrics
- [x] Fix e2e tests for admin metrics
- [x] Make cron schedule parameterized (It is hardcoded) | closed | 2023-04-11T23:10:19Z | 2023-04-17T11:54:28Z | 2023-04-17T11:51:30Z | AndreaFrancis |
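A hedged sketch of the "compute once in a cron job, serve the latest record" idea from the proposal above. The collection name, document shape and queries are assumptions, not the real schema:

```python
# Hypothetical sketch: the cron job stores one aggregated metrics document, and the
# /metrics endpoint reads the latest document instead of recomputing the counts.
from datetime import datetime, timezone
from prometheus_client import Gauge

QUEUE_JOBS_TOTAL = Gauge("queue_jobs_total", "Number of jobs in the queue", ["queue", "status"])


def collect_metrics(db) -> None:
    """Cron-job side: run the expensive aggregation once and store the result (pymongo-style db)."""
    doc = {
        "created_at": datetime.now(timezone.utc),
        "queue_jobs_total": [
            {"queue": q, "status": s, "total": db.jobs.count_documents({"type": q, "status": s})}
            for q in ("dataset-split-names",)  # assumed queue names
            for s in ("waiting", "started")
        ],
    }
    db.metrics.insert_one(doc)


def update_prometheus(db) -> None:
    """Endpoint side: read the latest stored record instead of aggregating again."""
    latest = db.metrics.find_one(sort=[("created_at", -1)])
    if latest is None:
        return
    for entry in latest["queue_jobs_total"]:
        QUEUE_JOBS_TOTAL.labels(queue=entry["queue"], status=entry["status"]).set(entry["total"])
```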
1,663,254,511 | feat: 🎸 get the dataset state, and backfill the missing parts | Creates two new endpoints:
- `GET /admin/dataset-state?dataset=[dataset]`: get the current state of the dataset (queue and cache)
- `POST /admin/dataset-backfill?dataset=[dataset]`: create (and delete) the jobs required to fix a dataset if necessary
They rely on the new class `DatasetState`. Please review the details of the implementation.
---
Here is a screencast of the two endpoints.
https://user-images.githubusercontent.com/1676121/232551041-4a12f9b9-490d-4d4b-b63a-ad7dbfe68e4b.mov
---
Not to be discussed here: as a follow-up, I think that most of the logic of the queue and DAG should go through that class, within a small orchestrator, instead of through the job runners.
- currently: when a job runner finishes a job, it tries to create the following jobs in the DAG order.
- proposal: when a job runner finishes a job, it sends the result to the orchestrator, that computes the dataset state and launches the missing jobs. | feat: 🎸 get the dataset state, and backfill the missing parts: Creates two new endpoints:
- `GET /admin/dataset-state?dataset=[dataset]`: get the current state of the dataset (queue and cache)
- `POST /admin/dataset-backfill?dataset=[dataset]`: create (and delete) the jobs required to fix a dataset if necessary
They rely on the new class `DatasetState`. Please review the details of the implementation.
---
Here is a screencast of the two endpoints.
https://user-images.githubusercontent.com/1676121/232551041-4a12f9b9-490d-4d4b-b63a-ad7dbfe68e4b.mov
---
Not to be discussed here: as a follow-up, I think that most of the logic of the queue and DAG should go through that class, within a small orchestrator, instead of through the job runners.
- currently: when a job runner finishes a job, it tries to create the following jobs in the DAG order.
- proposal: when a job runner finishes a job, it sends the result to the orchestrator, that computes the dataset state and launches the missing jobs. | closed | 2023-04-11T21:38:24Z | 2023-04-18T08:26:34Z | 2023-04-18T08:23:32Z | severo |
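A short usage example for the two admin endpoints described in the entry above. The base URL and the Authorization header are assumptions about the deployment, not documented API details:

```python
import requests

BASE = "http://localhost:8081"  # admin service URL, assumed
HEADERS = {"Authorization": "Bearer <admin-token>"}  # assumed auth scheme

# Inspect the current state of a dataset (queue and cache)
state = requests.get(f"{BASE}/admin/dataset-state", params={"dataset": "glue"}, headers=HEADERS)
print(state.json())

# Create (and delete) the jobs required to fix the dataset if necessary
backfill = requests.post(f"{BASE}/admin/dataset-backfill", params={"dataset": "glue"}, headers=HEADERS)
print(backfill.status_code)
```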
1,662,854,321 | Opt-in / opt-out URLs scan with Spawning | Scan the datasets for URLs and get the number of opt-in / opt-out URLs from artists from Spawning.
Right now it's limited to the first 100K rows (runs in a few seconds) but we can have subsequent jobs to run it on the full datasets.
The rows are obtained from streaming in order to support LAION-2B - but later we could just do it on the parquet exports | Opt-in / opt-out URLs scan with Spawning: Scan the datasets for URLs and get the number of opt-in / opt-out URLs from artists from Spawning.
Right now it's limited to the first 100K rows (runs in a few seconds) but we can have subsequent jobs to run it on the full datasets.
The rows are obtained from streaming in order to support LAION-2B - but later we could just do it on the parquet exports | closed | 2023-04-11T16:25:38Z | 2023-04-21T13:08:38Z | 2023-04-21T13:05:44Z | lhoestq |
1,662,801,597 | Remove /parquet-and-dataset-info job runner | Last step for https://github.com/huggingface/datasets-server/issues/735 and https://github.com/huggingface/datasets-server/issues/866
Now when we have config-level job runner `config-parquet-and-info`, we can drop dataset-level job runner as we don't need aggregation here.
TODO:
- [x] migrations (remove from cache and job databases) | Remove /parquet-and-dataset-info job runner: Last step for https://github.com/huggingface/datasets-server/issues/735 and https://github.com/huggingface/datasets-server/issues/866
Now when we have config-level job runner `config-parquet-and-info`, we can drop dataset-level job runner as we don't need aggregation here.
TODO:
- [x] migrations (remove from cache and job databases) | closed | 2023-04-11T15:55:07Z | 2023-04-27T11:58:20Z | 2023-04-26T11:24:09Z | polinaeterna |
1,661,965,281 | Fix parquet directories when config name is not a valid directory name | See https://github.com/huggingface/datasets-server/pull/985#issuecomment-1502148432 and comments above (https://github.com/huggingface/datasets-server/pull/985#discussion_r1160657830, https://github.com/huggingface/datasets-server/pull/985#discussion_r1160661131) | Fix parquet directories when config name is not a valid directory name: See https://github.com/huggingface/datasets-server/pull/985#issuecomment-1502148432 and comments above (https://github.com/huggingface/datasets-server/pull/985#discussion_r1160657830, https://github.com/huggingface/datasets-server/pull/985#discussion_r1160661131) | open | 2023-04-11T08:00:33Z | 2024-06-19T16:10:52Z | null | severo |
1,661,015,118 | Adding dataset, config and split in dataset-status | Closes https://github.com/huggingface/datasets-server/issues/948 | Adding dataset, config and split in dataset-status: Closes https://github.com/huggingface/datasets-server/issues/948 | closed | 2023-04-10T16:11:16Z | 2023-04-11T11:47:45Z | 2023-04-11T11:44:48Z | AndreaFrancis |
1,660,948,985 | Dataset status in admin ui | Adding new tab in admin-ui to get dataset status
Closes https://github.com/huggingface/datasets-server/issues/822
Depends on https://github.com/huggingface/datasets-server/pull/1041


| Dataset status in admin ui: Adding new tab in admin-ui to get dataset status
Closes https://github.com/huggingface/datasets-server/issues/822
Depends on https://github.com/huggingface/datasets-server/pull/1041


| closed | 2023-04-10T15:25:52Z | 2023-04-11T14:28:26Z | 2023-04-11T11:47:24Z | AndreaFrancis |
1,660,815,506 | Update `huggingface_hub` version to 0.13 | Needed to use new `list_repo_commits()` function, see https://github.com/huggingface/datasets-server/pull/985#discussion_r1160664348
also `huggingface_hub` imports are ignored by mypy now because `py.typed` file was removed in 0.13.0 release (https://github.com/huggingface/huggingface_hub/pull/1329) | Update `huggingface_hub` version to 0.13 : Needed to use new `list_repo_commits()` function, see https://github.com/huggingface/datasets-server/pull/985#discussion_r1160664348
also `huggingface_hub` imports are ignored by mypy now because `py.typed` file was removed in 0.13.0 release (https://github.com/huggingface/huggingface_hub/pull/1329) | closed | 2023-04-10T13:54:08Z | 2023-04-10T16:07:43Z | 2023-04-10T16:04:48Z | polinaeterna |
1,659,644,259 | Dataset Viewer issue for Inspire-art/final | ### Link
https://huggingface.co/datasets/Inspire-art/final
### Description
The dataset viewer is not working for dataset Inspire-art/final.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for Inspire-art/final: ### Link
https://huggingface.co/datasets/Inspire-art/final
### Description
The dataset viewer is not working for dataset Inspire-art/final.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-04-08T20:58:40Z | 2023-04-12T12:34:18Z | 2023-04-12T07:15:42Z | AMEERAZAM08 |
1,659,316,491 | Dataset Viewer issue for Tylersuard/PathfinderX2 | ### Link
https://huggingface.co/datasets/Tylersuard/PathfinderX2
### Description
The dataset viewer is not working for dataset Tylersuard/PathfinderX2.
Error details: I uploaded a big .zip file. I would like to be able to show a preview of some of the images.
```
Error code: PreviousStepFormatError
```
| Dataset Viewer issue for Tylersuard/PathfinderX2: ### Link
https://huggingface.co/datasets/Tylersuard/PathfinderX2
### Description
The dataset viewer is not working for dataset Tylersuard/PathfinderX2.
Error details: I uploaded a big .zip file. I would like to be able to show a preview of some of the images.
```
Error code: PreviousStepFormatError
```
| closed | 2023-04-08T00:46:59Z | 2023-04-12T12:58:44Z | 2023-04-12T12:58:43Z | Tylersuard |
1,658,711,054 | fix: 🐛 reduce the k8s job TTL to 5 minutes | It is required to allow helm uninstall quickly if necessary.
✅ Closes: https://github.com/huggingface/datasets-server/issues/1035 | fix: 🐛 reduce the k8s job TTL to 5 minutes: It is required to allow helm uninstall quickly if necessary.
✅ Closes: https://github.com/huggingface/datasets-server/issues/1035 | closed | 2023-04-07T11:35:32Z | 2023-04-07T12:30:23Z | 2023-04-07T12:27:33Z | severo |
1,658,709,614 | Harmonize the TTL of k8s jobs after being deleted | See https://github.com/huggingface/datasets-server/pull/788: it added a TTL of 5 minutes, in order not to block an uninstall.
But we recently added a new job (https://github.com/huggingface/datasets-server/pull/1017) with a TTL of 24h.
We put 24h because we wanted to be able to inspect the logs if necessary.
I understand it's not the right way to do it: we should instead access the logs in elasticsearch. We should reduce the TTL to 300s
But we recently added a new job (https://github.com/huggingface/datasets-server/pull/1017) with a TTL of 24h.
We put 24h because we wanted to be able to inspect the logs if necessary.
I understand it's not the right way to do it: we should instead access the logs in elasticsearch. We should reduce the TTL to 300s | closed | 2023-04-07T11:33:35Z | 2023-04-07T12:27:34Z | 2023-04-07T12:27:34Z | severo |
1,658,703,432 | fix: 🐛 enable the two last migrations | It was missing in
https://github.com/huggingface/datasets-server/pull/1033
FYI @albertvillanova | fix: 🐛 enable the two last migrations: It was missing in
https://github.com/huggingface/datasets-server/pull/1033
FYI @albertvillanova | closed | 2023-04-07T11:25:28Z | 2023-04-07T12:45:55Z | 2023-04-07T11:28:58Z | severo |
1,658,488,895 | Remove /splits | The /splits processing step is redundant and uses many resources in vain. It's time to delete it:
- from the code (the workers will not be able to process these jobs, the API will not use these cache entries to serve /splits)
- from the jobs database
- from the cache database | Remove /splits: The /splits processing step is redundant and uses many resources in vain. It's time to delete it:
- from the code (the workers will not be able to process these jobs, the API will not use these cache entries to serve /splits)
- from the jobs database
- from the cache database | closed | 2023-04-07T07:43:03Z | 2023-04-07T09:29:31Z | 2023-04-07T09:26:48Z | severo |
1,657,960,939 | Create step dataset-is-valid | See https://github.com/huggingface/datasets-server/issues/891 | Create step dataset-is-valid: See https://github.com/huggingface/datasets-server/issues/891 | closed | 2023-04-06T19:54:24Z | 2023-04-07T12:30:00Z | 2023-04-07T12:27:02Z | severo |
1,657,874,362 | Increase backfill resources and adding logs | Increase resources for the cache_maintenance job - backfill action.
Adding logs every 100 datasets in the backfill | Increase backfill resources and adding logs: Increase resources for the cache_maintenance job - backfill action.
Adding logs every 100 datasets in the backfill | closed | 2023-04-06T18:32:19Z | 2023-04-06T18:43:27Z | 2023-04-06T18:40:10Z | AndreaFrancis |
1,657,744,658 | Dataset Viewer issue for THUIR/T2Ranking | ### Link
https://huggingface.co/datasets/THUIR/T2Ranking
### Description
The dataset viewer is not working for dataset THUIR/T2Ranking.
Error details:
```
Error code: JobRunnerCrashedError
```
| Dataset Viewer issue for THUIR/T2Ranking: ### Link
https://huggingface.co/datasets/THUIR/T2Ranking
### Description
The dataset viewer is not working for dataset THUIR/T2Ranking.
Error details:
```
Error code: JobRunnerCrashedError
```
| closed | 2023-04-06T16:46:07Z | 2023-04-07T06:41:46Z | 2023-04-07T06:41:46Z | Deriq-Qian-Dong |
1,657,719,071 | Moving cache_refresh job to upgrade maintenance action | Depends on https://github.com/huggingface/datasets-server/pull/1028 | Moving cache_refresh job to upgrade maintenance action: Depends on https://github.com/huggingface/datasets-server/pull/1028 | closed | 2023-04-06T16:25:27Z | 2023-04-11T07:44:32Z | 2023-04-10T12:06:56Z | AndreaFrancis |
1,657,685,220 | Disable full backfill action | Disabling the backfill Job action since it was intended to run only once.
Later, it will be a cron job (but it should backfill only the missing cache records in order to avoid appending too many jobs). | Disable full backfill action: Disabling the backfill Job action since it was intended to run only once.
Later, it will be a cron job (but it should backfill only the missing cache records in order to avoid appending too many jobs). | closed | 2023-04-06T15:59:53Z | 2023-04-07T06:58:42Z | 2023-04-07T06:55:26Z | AndreaFrancis |
1,657,638,435 | Increase resources for backfill | null | Increase resources for backfill: | closed | 2023-04-06T15:26:34Z | 2023-04-06T15:43:08Z | 2023-04-06T15:39:47Z | AndreaFrancis |
1,657,580,417 | Random access: image and audio files support | I extended the `/rows` API endpoint to also return image and audio files.
Similarly to the `first-rows` job, it writes the image and audio files to the NFS. But instead of using the `assets/` directory, it uses `cached-assets/`. The reason is that this directory is a cache: the image and audio files may be removed from time to time to free disk space.
I added a new config `CachedAssetsConfig` to define the cached assets URLs, storage directory and the cache parameters (e.g. the number of most recent rows to keep)
Because the API service now requires `datasets`, `Pillow` etc. I added those dependencies in libcommon and factorized the viewer assets and features code in libcommon in a `viewer_utils` submodule.
Finally I added tests for the regular `/rows` for both text and image cases.
## How it works
1. there's a request to an image dataset to `/rows`
2. (optional - in 5% of requests) the cached assets directory is cleaned to save disk space using a simple (?) heuristic:
* it takes a big sample of rows from the cache using glob
* it keeps the most recent ones (max 200)
* it keeps the rows below a certain index (max 100)
* it discards the rest
3. the arrow data are retrieved from the parquet files using the row indexer
4. the cached assets and URLs are created on disk using the same mechanism as the `first-rows` job
5. the response is returned and contains URLs to the cached assets
## Other details
- I updated `pip-audit` everywhere to fix issues with extras/duplicated requirements
- I introduced the `MockFileSystem` (originally from `datasets`) that can be used to test the parquet data indexing for random access | Random access: image and audio files support: I extended the `/rows` API endpoint to also return image and audio files.
Similarly to the `first-rows` job, it writes the image and audio files to the NFS. But instead of using the `assets/` directory, it uses `cached-assets/`. The reason is that this directory is a cache: the image and audio files may be removed from time to time to free disk space.
I added a new config `CachedAssetsConfig` to define the cached assets URLs, storage directory and the cache parameters (e.g. the number of most recent rows to keep)
Because the API service now requires `datasets`, `Pillow` etc. I added those dependencies in libcommon and factorized the viewer assets and features code in libcommon in a `viewer_utils` submodule.
Finally I added tests for the regular `/rows` for both text and image cases.
## How it works
1. there's a request to an image dataset to `/rows`
2. (optional - in 5% of requests) the cached assets directory is cleaned to save disk space using a simple (?) heuristic:
* it takes a big sample of rows from the cache using glob
* it keeps the most recent ones (max 200)
* it keeps the rows below a certain index (max 100)
* it discards the rest
3. the arrow data are retrieved from the parquet files using the row indexer
4. the cached assets and URLs are created on disk using the same mechanism as the `first-rows` job
5. the response is returned and contains URLs to the cached assets
## Other details
- I updated `pip-audit` everywhere to fix issues with extras/duplicated requirements
- I introduced the `MockFileSystem` (originally from `datasets`) that can be used to test the parquet data indexing for random access | closed | 2023-04-06T14:54:02Z | 2023-04-12T16:36:13Z | 2023-04-12T16:32:37Z | lhoestq |
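A rough sketch of the cleaning heuristic described in the entry above (clean only a fraction of requests, keep the most recent rows, keep low row indexes, discard the rest). The directory layout and thresholds are assumptions, not the actual implementation:

```python
import random
import shutil
from pathlib import Path

CACHED_ASSETS_DIR = Path("/storage/cached-assets")  # assumed location
KEEP_MOST_RECENT = 200
KEEP_FIRST_ROWS_BELOW = 100
CLEAN_PROBABILITY = 0.05  # clean in ~5% of the requests


def clean_cached_assets() -> None:
    if random.random() > CLEAN_PROBABILITY:
        return
    # assumed layout: <dataset>/<config>/<split>/<row_index>/<column>/<file>
    row_dirs = [p for p in CACHED_ASSETS_DIR.glob("*/*/*/*") if p.is_dir()]
    most_recent = set(sorted(row_dirs, key=lambda p: p.stat().st_mtime, reverse=True)[:KEEP_MOST_RECENT])
    for row_dir in row_dirs:
        try:
            row_index = int(row_dir.name)
        except ValueError:
            continue
        if row_dir in most_recent or row_index < KEEP_FIRST_ROWS_BELOW:
            continue
        shutil.rmtree(row_dir, ignore_errors=True)
```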
1,657,280,589 | Create step `dataset-split-names` | See https://github.com/huggingface/datasets-server/issues/1014#issuecomment-1497597525
It aims at replacing `dataset-split-names-from-dataset-info` and `dataset-split-names-from-streaming`, now that one step can have two parents (or more)
Note the new method:
https://github.com/huggingface/datasets-server/blob/4e776c95799e23a6874ca1ebce0a1cfd7cb29e39/libs/libcommon/src/libcommon/simple_cache.py#L241-L267
It might be used in other places, for example in `service/api` instead of:
https://github.com/huggingface/datasets-server/blob/e89a249ac3dc7548517f8c516089b9f961258565/services/api/src/api/routes/endpoint.py#L52
| Create step `dataset-split-names`: See https://github.com/huggingface/datasets-server/issues/1014#issuecomment-1497597525
It aims at replacing `dataset-split-names-from-dataset-info` and `dataset-split-names-from-streaming`, now that one step can have two parents (or more)
Note the new method:
https://github.com/huggingface/datasets-server/blob/4e776c95799e23a6874ca1ebce0a1cfd7cb29e39/libs/libcommon/src/libcommon/simple_cache.py#L241-L267
It might be used in other places, for example in `service/api` instead of:
https://github.com/huggingface/datasets-server/blob/e89a249ac3dc7548517f8c516089b9f961258565/services/api/src/api/routes/endpoint.py#L52
| closed | 2023-04-06T11:53:59Z | 2023-04-06T15:09:19Z | 2023-04-06T15:06:00Z | severo |
1,656,846,713 | Dataset Viewer issue for 2030NLP/SpaCE2022 | ### Link
https://huggingface.co/datasets/2030NLP/SpaCE2022
### Description
The dataset viewer is not working for dataset 2030NLP/SpaCE2022.
Error details:
```
Error code: JobRunnerExceededMaximumDurationError
```
| Dataset Viewer issue for 2030NLP/SpaCE2022: ### Link
https://huggingface.co/datasets/2030NLP/SpaCE2022
### Description
The dataset viewer is not working for dataset 2030NLP/SpaCE2022.
Error details:
```
Error code: JobRunnerExceededMaximumDurationError
```
| closed | 2023-04-06T07:41:10Z | 2023-05-09T07:52:41Z | 2023-05-09T07:52:41Z | gitforziio |
1,656,840,475 | Dataset Viewer issue for knkarthick/dialogsum_reformat | ### Link
https://huggingface.co/datasets/knkarthick/dialogsum_reformat
### Description
The dataset viewer is not working for dataset knkarthick/dialogsum_reformat.
Error details:
```
Error code: ResponseNotFound
```
| Dataset Viewer issue for knkarthick/dialogsum_reformat: ### Link
https://huggingface.co/datasets/knkarthick/dialogsum_reformat
### Description
The dataset viewer is not working for dataset knkarthick/dialogsum_reformat.
Error details:
```
Error code: ResponseNotFound
```
| closed | 2023-04-06T07:36:11Z | 2023-04-06T08:11:48Z | 2023-04-06T08:11:48Z | knkarthick |
1,656,803,844 | Dataset Viewer issue for knkarthick/dialogsum_reformat | ### Link
https://huggingface.co/datasets/knkarthick/dialogsum_reformat
### Description
The dataset viewer is not working for dataset knkarthick/dialogsum_reformat.
Error details:
```
Error code: ResponseNotFound
```
| Dataset Viewer issue for knkarthick/dialogsum_reformat: ### Link
https://huggingface.co/datasets/knkarthick/dialogsum_reformat
### Description
The dataset viewer is not working for dataset knkarthick/dialogsum_reformat.
Error details:
```
Error code: ResponseNotFound
```
| closed | 2023-04-06T07:12:54Z | 2023-04-06T09:09:27Z | 2023-04-06T09:08:57Z | knkarthick |
1,656,145,764 | Add a parent to first-rows-from-streaming job | null | Add a parent to first-rows-from-streaming job: | closed | 2023-04-05T19:06:57Z | 2023-04-06T08:12:42Z | 2023-04-06T08:09:12Z | severo |
1,656,020,934 | Fix RowsIndex.query | Found this bug while adding tests | Fix RowsIndex.query: Found this bug while adding tests | closed | 2023-04-05T17:27:49Z | 2023-04-06T09:36:13Z | 2023-04-06T09:33:10Z | lhoestq |
1,655,816,091 | Allow multiple parent steps | A step can now have multiple parents, i.e. multiple steps can trigger it. Note: the ProcessingStep API has changed: `parent` is removed (since it's not used in the code) and `requires` is now a list of strings.
Related to https://github.com/huggingface/datasets-server/issues/1014 | Allow multiple parent steps: A step can now have multiple parents, i.e. multiple steps can trigger it. Note: the ProcessingStep API has changed: `parent` is removed (since it's not used in the code) and `requires` is now a list of strings.
Related to https://github.com/huggingface/datasets-server/issues/1014 | closed | 2023-04-05T15:11:58Z | 2023-04-05T18:46:02Z | 2023-04-05T18:43:11Z | severo |
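Illustrative only: the general shape of a processing graph specification where `requires` is a list, so a step can be triggered by several parents. The step names mirror those mentioned elsewhere in these entries, but the exact dict format is an assumption:

```python
PROCESSING_GRAPH_SPECIFICATION = {
    "config-info": {"input_type": "config", "requires": ["/parquet-and-dataset-info"]},
    "/split-names-from-dataset-info": {"input_type": "config", "requires": ["config-info"]},
    "dataset-split-names": {
        "input_type": "dataset",
        # triggered by either of its two parents
        "requires": ["/split-names-from-dataset-info", "/split-names-from-streaming"],
    },
}
```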
1,654,857,509 | Dataset Viewer issue for RUCAIBox/Data-to-text-Generation | ### Link
https://huggingface.co/datasets/RUCAIBox/Data-to-text-Generation
### Description
The dataset viewer is not working for dataset RUCAIBox/Data-to-text-Generation.
Error details:
```
Error code: ResponseNotFound
```
| Dataset Viewer issue for RUCAIBox/Data-to-text-Generation: ### Link
https://huggingface.co/datasets/RUCAIBox/Data-to-text-Generation
### Description
The dataset viewer is not working for dataset RUCAIBox/Data-to-text-Generation.
Error details:
```
Error code: ResponseNotFound
```
| closed | 2023-04-05T03:06:05Z | 2023-04-12T09:27:13Z | 2023-04-06T08:52:50Z | hoboyu11 |
1,654,542,914 | Moving /backfill to a k8s job | First step of https://github.com/huggingface/datasets-server/issues/740
New k8s job under /jobs/cache_maintenance folder | Moving /backfill to a k8s job: First step of https://github.com/huggingface/datasets-server/issues/740
New k8s job under /jobs/cache_maintenance folder | closed | 2023-04-04T20:35:23Z | 2023-04-06T15:17:54Z | 2023-04-06T15:14:55Z | AndreaFrancis |
1,654,336,985 | Use networkx to compute the processing graph | We now rely on a specialized library to compute the graph. It will help when we allow multiple parents per step.
See https://github.com/huggingface/datasets-server/issues/1014 | Use networkx to compute the processing graph: We now rely on a specialized library to compute the graph. It will help when we allow multiple parents per step.
See https://github.com/huggingface/datasets-server/issues/1014 | closed | 2023-04-04T18:07:23Z | 2023-04-05T15:03:33Z | 2023-04-05T15:00:19Z | severo |
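A small sketch (assumed, not the actual implementation) of how networkx can model the processing graph: edges go from a step to the steps it triggers, and the library checks acyclicity and gives a valid processing order:

```python
import networkx as nx

graph = nx.DiGraph()
graph.add_edge("/config-names", "/split-names-from-streaming")
graph.add_edge("config-info", "/split-names-from-dataset-info")
graph.add_edge("/split-names-from-streaming", "dataset-split-names")
graph.add_edge("/split-names-from-dataset-info", "dataset-split-names")  # two parents

assert nx.is_directed_acyclic_graph(graph)
print(list(nx.topological_sort(graph)))                  # a valid processing order
print(list(graph.predecessors("dataset-split-names")))   # all the steps that trigger it
```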
1,653,703,914 | Dataset Viewer issue for THUIR/T2Ranking | ### Link
https://huggingface.co/datasets/THUIR/T2Ranking
### Description
The dataset viewer is not working for dataset THUIR/T2Ranking.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for THUIR/T2Ranking: ### Link
https://huggingface.co/datasets/THUIR/T2Ranking
### Description
The dataset viewer is not working for dataset THUIR/T2Ranking.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-04-04T11:27:18Z | 2023-04-07T06:25:50Z | 2023-04-06T08:36:48Z | Deriq-Qian-Dong |
1,653,550,873 | Allow a step to depend on (be triggered by) several other steps | This is needed for https://github.com/huggingface/datasets-server/issues/891.
And by the way, several steps already implicitly depend on multiple other steps (they read the cached values of those steps), but their refresh is only triggered by one of them.
| Allow a step to depend on (be triggered by) several other steps: This is needed for https://github.com/huggingface/datasets-server/issues/891.
And by the way, several steps already implicitly depend on multiple other steps (they read the cached values of those steps), but their refresh is only triggered by one of them.
| closed | 2023-04-04T09:48:58Z | 2023-05-10T13:51:55Z | 2023-05-10T13:51:55Z | severo |
1,653,436,113 | Dataset Viewer issue for cryscan/multilingual-share | ### Link
https://huggingface.co/datasets/cryscan/multilingual-share
### Description
The dataset viewer is not working for dataset cryscan/multilingual-share.
Error details:
```
Error code: JobRunnerCrashedError
```
Maybe reset the pointer to [this file](https://huggingface.co/datasets/cryscan/multilingual-share/blob/main/sharegpt_90k_unfiltered_multilang_fixed.json)? | Dataset Viewer issue for cryscan/multilingual-share: ### Link
https://huggingface.co/datasets/cryscan/multilingual-share
### Description
The dataset viewer is not working for dataset cryscan/multilingual-share.
Error details:
```
Error code: JobRunnerCrashedError
```
Maybe reset the pointer to [this file](https://huggingface.co/datasets/cryscan/multilingual-share/blob/main/sharegpt_90k_unfiltered_multilang_fixed.json)? | closed | 2023-04-04T08:34:08Z | 2023-05-24T06:26:43Z | 2023-05-24T06:26:42Z | cryscan |
1,653,321,153 | Dataset Viewer issue for nlpconnect/DocVQA | ### Link
https://huggingface.co/datasets/nlpconnect/DocVQA
### Description
The dataset viewer is not working for dataset nlpconnect/DocVQA.
Error details:
```
Error code: ResponseNotFound
```
| Dataset Viewer issue for nlpconnect/DocVQA: ### Link
https://huggingface.co/datasets/nlpconnect/DocVQA
### Description
The dataset viewer is not working for dataset nlpconnect/DocVQA.
Error details:
```
Error code: ResponseNotFound
```
| closed | 2023-04-04T07:13:40Z | 2023-04-06T08:15:38Z | 2023-04-06T08:14:41Z | tbeijloos |
1,651,949,858 | Remove authentication by cookie? | Currently, to be able to return the contents for gated datasets, all the endpoints check the request credentials if needed. The accepted credentials are: HF token, HF cookie, or a JWT in `X-Api-Key`. See https://github.com/huggingface/datasets-server/blob/ecb861b5e8d728b80391f580e63c8d2cad63a1fc/services/api/src/api/authentication.py#L26
Should we remove the cookie authentication?
cc @coyotte508 @SBrandeis @XciD @rtrompier | Remove authentication by cookie?: Currently, to be able to return the contents for gated datasets, all the endpoints check the request credentials if needed. The accepted credentials are: HF token, HF cookie, or a JWT in `X-Api-Key`. See https://github.com/huggingface/datasets-server/blob/ecb861b5e8d728b80391f580e63c8d2cad63a1fc/services/api/src/api/authentication.py#L26
Should we remove the cookie authentication?
cc @coyotte508 @SBrandeis @XciD @rtrompier | closed | 2023-04-03T12:12:56Z | 2024-03-13T09:48:38Z | 2024-02-06T15:53:57Z | severo |
1,651,557,796 | Update datasets dependency to 2.11.0 version | Close #1009. | Update datasets dependency to 2.11.0 version: Close #1009. | closed | 2023-04-03T08:07:00Z | 2023-04-03T08:43:54Z | 2023-04-03T08:41:03Z | albertvillanova |
1,651,519,895 | Update datasets dependency to 2.11.0 version | After `2.11.0` datasets release, update dependencies on it. | Update datasets dependency to 2.11.0 version: After `2.11.0` datasets release, update dependencies on it. | closed | 2023-04-03T07:44:30Z | 2023-04-03T08:41:04Z | 2023-04-03T08:41:04Z | albertvillanova |
1,650,543,878 | Dataset Viewer issue for voxlingua107 | ### Link
_No response_
### Description
_No response_ | Dataset Viewer issue for voxlingua107: ### Link
_No response_
### Description
_No response_ | closed | 2023-04-01T16:05:17Z | 2023-04-03T07:33:25Z | 2023-04-03T07:31:11Z | msk-ms |
1,649,594,997 | First rows skip compute if a response already exists | Add validation to the first-rows job runners to skip the computation if a similar response already exists (parallel responses: split-first-rows-from-streaming vs split-first-rows-from-parquet).
I moved the validation to the job runner level since we have exactly the same logic for split-names-from-streaming vs split-names-from-dataset-info.
Context: https://github.com/huggingface/datasets-server/pull/988#issuecomment-1491968842 | First rows skip compute if a response already exists: Add validation to the first-rows job runners to skip the computation if a similar response already exists (parallel responses: split-first-rows-from-streaming vs split-first-rows-from-parquet).
I moved the validation to the job runner level since we have exactly the same logic for split-names-from-streaming vs split-names-from-dataset-info.
Context: https://github.com/huggingface/datasets-server/pull/988#issuecomment-1491968842 | closed | 2023-03-31T15:36:09Z | 2023-04-03T14:48:44Z | 2023-04-03T14:45:18Z | AndreaFrancis |
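A hedged sketch of the "skip if an equivalent response already exists" check described in the entry above; the cache-entry fields and the version comparison are assumptions about the internals:

```python
from http import HTTPStatus
from typing import Any, Mapping, Optional


def should_skip_compute(
    parallel_entry: Optional[Mapping[str, Any]], current_job_runner_version: int
) -> bool:
    """True when the parallel job runner (e.g. split-first-rows-from-parquet vs
    split-first-rows-from-streaming) already produced an up-to-date successful response."""
    if parallel_entry is None:
        return False
    return (
        parallel_entry["http_status"] == HTTPStatus.OK
        and parallel_entry.get("job_runner_version") == current_job_runner_version
    )
```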
1,649,520,030 | feat: 🎸 move COMMON_LOG_LEVEL to LOG_LEVEL | This allows to define LOG_LEVEL without having to define other "common" variables like HF_ENDPOINT and HF_TOKEN
Required by https://github.com/huggingface/datasets-server/issues/994 | feat: 🎸 move COMMON_LOG_LEVEL to LOG_LEVEL: This allows to define LOG_LEVEL without having to define other "common" variables like HF_ENDPOINT and HF_TOKEN
Required by https://github.com/huggingface/datasets-server/issues/994 | closed | 2023-03-31T14:58:14Z | 2023-03-31T15:40:06Z | 2023-03-31T15:37:03Z | severo |
1,649,173,592 | Raise error when the viewer is disabled | This PR raises an error when the viewer is disabled, thus making the server fail quickly instead of relying on moon-landing.
Fix #1004. | Raise error when the viewer is disabled: This PR raises an error when the viewer is disabled, thus making the server fail quickly instead of relying on moon-landing.
Fix #1004. | closed | 2023-03-31T11:15:10Z | 2023-04-04T08:00:58Z | 2023-04-04T07:58:10Z | albertvillanova |
1,649,016,269 | Fail quickly when the dataset card contains `viewer: false` | The Hub shows a specific message, instead of the dataset viewer, when the dataset card contains the metadata `viewer: false`.
I think that we should take it into account when checking if a dataset is supported, and fail fast (adding this case in https://github.com/huggingface/datasets-server/blob/main/libs/libcommon/src/libcommon/dataset.py#L165, along to checking if the dataset is private) | Fail quickly when the dataset card contains `viewer: false`: The Hub shows a specific message, instead of the dataset viewer, when the dataset card contains the metadata `viewer: false`.
I think that we should take it into account when checking if a dataset is supported, and fail fast (adding this case in https://github.com/huggingface/datasets-server/blob/main/libs/libcommon/src/libcommon/dataset.py#L165, along to checking if the dataset is private) | closed | 2023-03-31T09:23:55Z | 2023-04-04T07:58:11Z | 2023-04-04T07:58:11Z | severo |
1,648,990,965 | Support 3D models | Note that the "file viewer" on the Hub already renders the 3D models (.glb) using https://modelviewer.dev/:
https://huggingface.co/datasets/allenai/objaverse/blob/main/glbs/000-000/000074a334c541878360457c672b6c2e.glb
https://user-images.githubusercontent.com/1676121/229077088-ef9bfde2-2e9a-4b12-8ac9-2d0ed3283df7.mov
related to #979 | Support 3D models: Note that the "file viewer" on the Hub already renders the 3D models (.glb) using https://modelviewer.dev/:
https://huggingface.co/datasets/allenai/objaverse/blob/main/glbs/000-000/000074a334c541878360457c672b6c2e.glb
https://user-images.githubusercontent.com/1676121/229077088-ef9bfde2-2e9a-4b12-8ac9-2d0ed3283df7.mov
related to #979 | open | 2023-03-31T09:07:00Z | 2024-06-19T14:00:56Z | null | severo |
1,648,882,763 | Update datasets to 2.11.0 | See https://github.com/huggingface/datasets/releases/tag/2.11.0
TODO: See discussions below
- [x] #1009
- [x] #1280
- [x] #1281
- [x] Use writer_batch_size for ArrowBasedBuilder
- [ ] Use direct cast from binary to Audio/Image
- [ ] Refresh datasets that use numpy.load
Useful changes for the datasets server (please complete if there are more, @huggingface/datasets)
> Use soundfile for mp3 decoding instead of torchaudio by @polinaeterna in https://github.com/huggingface/datasets/pull/5573
>
> - this allows to not have dependencies on pytorch to decode audio files
> - this was possible with soundfile 0.12 which bundles libsndfile binaries at a recent version with MP3 support
should we remove the dependency to torch and torchaudio? cc @polinaeterna
> Add writer_batch_size for ArrowBasedBuilder by @lhoestq in https://github.com/huggingface/datasets/pull/5565
> - allow to specify the row group / record batch size when you download_and_prepare() a dataset
Needed for https://github.com/huggingface/datasets-server/pull/833 I think; cc @lhoestq
> Allow direct cast from binary to Audio/Image by @mariosasko in https://github.com/huggingface/datasets/pull/5644
Should we adapt the code in https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/features.py due to that?
> Support streaming datasets with numpy.load by @albertvillanova in https://github.com/huggingface/datasets/pull/5626
should we refresh some datasets after that? | Update datasets to 2.11.0: See https://github.com/huggingface/datasets/releases/tag/2.11.0
TODO: See discussions below
- [x] #1009
- [x] #1280
- [x] #1281
- [x] Use writer_batch_size for ArrowBasedBuilder
- [ ] Use direct cast from binary to Audio/Image
- [ ] Refresh datasets that use numpy.load
Useful changes for the datasets server (please complete if there are more, @huggingface/datasets)
> Use soundfile for mp3 decoding instead of torchaudio by @polinaeterna in https://github.com/huggingface/datasets/pull/5573
>
> - this allows to not have dependencies on pytorch to decode audio files
> - this was possible with soundfile 0.12 which bundles libsndfile binaries at a recent version with MP3 support
should we remove the dependency to torch and torchaudio? cc @polinaeterna
> Add writer_batch_size for ArrowBasedBuilder by @lhoestq in https://github.com/huggingface/datasets/pull/5565
> - allow to specify the row group / record batch size when you download_and_prepare() a dataset
Needed for https://github.com/huggingface/datasets-server/pull/833 I think; cc @lhoestq
> Allow direct cast from binary to Audio/Image by @mariosasko in https://github.com/huggingface/datasets/pull/5644
Should we adapt the code in https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/features.py due to that?
> Support streaming datasets with numpy.load by @albertvillanova in https://github.com/huggingface/datasets/pull/5626
should we refresh some datasets after that? | closed | 2023-03-31T08:06:01Z | 2023-07-06T15:04:37Z | 2023-07-06T15:04:37Z | severo |
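A small illustration of the "direct cast from binary to Audio/Image" change mentioned above (datasets >= 2.11.0). This is an assumed usage example, not code from the datasets-server:

```python
import io

from datasets import Dataset, Image
from PIL import Image as PILImage

# build a tiny in-memory PNG so the example is self-contained
buf = io.BytesIO()
PILImage.new("RGB", (4, 4), color="red").save(buf, format="PNG")

ds = Dataset.from_dict({"image": [buf.getvalue()]})  # a binary column
ds = ds.cast_column("image", Image())                # direct cast to the Image feature
print(ds.features)                                   # {'image': Image(...)}
print(ds[0]["image"])                                # decoded as a PIL image
```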
1,647,677,383 | Add total_rows in /rows response? | Should we add the number of rows in a split (eg. in field `total_rows`) in response to /rows?
It would help avoid sending a request to /size to get it.
It would also help fix a bad query.
eg: https://datasets-server.huggingface.co/rows?dataset=glue&config=ax&split=test&offset=50000&length=100 returns:
```json
{
"features": [
...
],
"rows": []
}
```
We would have to know the number of rows to fix it. | Add total_rows in /rows response?: Should we add the number of rows in a split (eg. in field `total_rows`) in response to /rows?
It would help avoid sending a request to /size to get it.
It would also help fix a bad query.
eg: https://datasets-server.huggingface.co/rows?dataset=glue&config=ax&split=test&offset=50000&length=100 returns:
```json
{
"features": [
...
],
"rows": []
}
```
We would have to know the number of rows to fix it. | closed | 2023-03-30T13:54:19Z | 2023-05-07T15:04:12Z | 2023-05-07T15:04:12Z | severo |
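An example of the current workaround mentioned above: ask /size for the number of rows, then page through /rows. The endpoint names come from the entry itself; treat the exact response field paths as assumptions:

```python
import requests

API = "https://datasets-server.huggingface.co"
params = {"dataset": "glue", "config": "ax", "split": "test"}

size = requests.get(f"{API}/size", params={"dataset": "glue"}).json()
total = next(
    s["num_rows"] for s in size["size"]["splits"]
    if s["config"] == "ax" and s["split"] == "test"
)

offset, length = 0, 100
while offset < total:
    rows = requests.get(f"{API}/rows", params={**params, "offset": offset, "length": length}).json()
    print(offset, len(rows["rows"]))
    offset += length
```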
1,647,382,771 | Update job_runner.py | This PR resolves #854 by turning `get_new_splits` into an abstract method. | Update job_runner.py: This PR resolves #854 by turning `get_new_splits` into an abstract method. | closed | 2023-03-30T10:57:09Z | 2023-05-07T15:04:13Z | 2023-05-07T15:04:13Z | Aniket1299 |
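Illustrative only: what turning `get_new_splits` into an abstract method looks like in general. The class names beyond `JobRunner`/`get_new_splits` and the return type are assumptions:

```python
from abc import ABC, abstractmethod
from typing import Any, Mapping, Set


class JobRunner(ABC):
    @abstractmethod
    def get_new_splits(self, content: Mapping[str, Any]) -> Set[str]:
        """Each concrete job runner must state which splits its response creates."""
        ...


class SplitNamesJobRunner(JobRunner):
    def get_new_splits(self, content: Mapping[str, Any]) -> Set[str]:
        return {s["split"] for s in content["splits"]}
```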
1,647,178,269 | Use the huggingface_hub webhook server? | See https://github.com/huggingface/huggingface_hub/pull/1410
The/webhook endpoint could live in its pod with the huggingface_hub webhook server. Is it useful for our project? Feel free to comment. | Use the huggingface_hub webhook server?: See https://github.com/huggingface/huggingface_hub/pull/1410
The/webhook endpoint could live in its pod with the huggingface_hub webhook server. Is it useful for our project? Feel free to comment. | closed | 2023-03-30T08:44:49Z | 2023-06-10T15:04:09Z | 2023-06-10T15:04:09Z | severo |
1,646,385,911 | Appropriate HTTPStatus code in worker Custom Exception classes | Context: @severo's comment https://github.com/huggingface/datasets-server/pull/995#discussion_r1151161546
For now we only support the 500 INTERNAL_SERVER_ERROR and 501 NOT_IMPLEMENTED errors in the custom Exception classes of the worker project.
But this could be misunderstood, since we raise those errors on purpose and they are not the result of another request or a dependent module.
We should evaluate each case and change to an appropriate HTTPStatus code.
| Appropriate HTTPStatus code in worker Custom Exception classes: Context: @severo's comment https://github.com/huggingface/datasets-server/pull/995#discussion_r1151161546
For now we only support the 500 INTERNAL_SERVER_ERROR and 501 NOT_IMPLEMENTED errors in the custom Exception classes of the worker project.
But this could be misunderstood, since we raise those errors on purpose and they are not the result of another request or a dependent module.
We should evaluate each case and change to an appropriate HTTPStatus code.
| closed | 2023-03-29T19:24:25Z | 2023-08-14T15:37:00Z | 2023-08-14T15:37:00Z | AndreaFrancis |
1,645,973,778 | Dataset Viewer issue for tcor0005/langchain-docs-400-chunksize | ### Link
https://huggingface.co/datasets/tcor0005/langchain-docs-400-chunksize
### Description
Hey everyone, I am trying to upload a dataset. When I upload the file and click commit, I get a message saying my dataset was committed successfully, and yet it does not appear on the dataset page, and I receive this error when I try to open the dataset preview. I've tried multiple times with no success. What am I doing wrong? Or is it an issue with huggingface at the moment?
The dataset viewer is not working for dataset tcor0005/langchain-docs-400-chunksize.
Error details:
```
Error code: ResponseNotFound
```
| Dataset Viewer issue for tcor0005/langchain-docs-400-chunksize: ### Link
https://huggingface.co/datasets/tcor0005/langchain-docs-400-chunksize
### Description
Hey everyone, I am trying to upload a dataset. When I upload the file and click commit, I get a message saying my dataset was committed successfully, and yet it does not appear on the dataset page, and I receive this error when I try to open the dataset preview. I've tried multiple times with no success. What am I doing wrong? Or is it an issue with huggingface at the moment?
The dataset viewer is not working for dataset tcor0005/langchain-docs-400-chunksize.
Error details:
```
Error code: ResponseNotFound
```
| closed | 2023-03-29T14:47:07Z | 2023-03-30T06:20:07Z | 2023-03-30T06:20:06Z | TimothyCorreia-Paul |
1,645,814,099 | Updating dependencies in worker to support first rows from parquet | Preparation for https://github.com/huggingface/datasets-server/pull/988 implementation
As suggested in https://github.com/huggingface/datasets-server/pull/988#issuecomment-1487555308
I followed these steps:
- From main branch, ran make install
- poetry remove apache-beam
- Updated the dependencies in pyproject.toml for datasets, hffs, pyarrow and numpy
- Ran manually poetry update for all the packages: datasets, hffs, pyarrow and numpy | Updating dependencies in worker to support first rows from parquet: Preparation for https://github.com/huggingface/datasets-server/pull/988 implementation
As suggested in https://github.com/huggingface/datasets-server/pull/988#issuecomment-1487555308
I followed these steps:
- From main branch, ran make install
- poetry remove apache-beam
- Updated the dependencies in pyproject.toml for datasets, hffs, pyarrow and numpy
- Ran manually poetry update for all the packages: datasets, hffs, pyarrow and numpy | closed | 2023-03-29T13:26:15Z | 2023-03-29T22:05:24Z | 2023-03-29T22:02:05Z | AndreaFrancis |
1,644,536,449 | check if /split-names-from-* exists before processing it | Closes https://github.com/huggingface/datasets-server/issues/963
Adding validation to the config-level split-names job runners to throw an error if a response with the same info has already been computed. | check if /split-names-from-* exists before processing it: Closes https://github.com/huggingface/datasets-server/issues/963
Adding validation to the config-level split-names job runners to throw an error if a response with the same info has already been computed. | closed | 2023-03-28T19:24:05Z | 2023-03-30T19:24:14Z | 2023-03-30T19:20:45Z | AndreaFrancis |
1,644,467,099 | Job Migration should not be dependent on the hf-token secret | ### Link
_No response_
### Description
Job migration should not require the hf-token secret:
https://github.com/huggingface/datasets-server/blob/main/chart/templates/jobs/mongodb-migration/job.yaml#L23
https://github.com/huggingface/datasets-server/blob/main/chart/templates/_envCommon.tpl#L9 | Job Migration should not be dependent on the hf-token secret: ### Link
_No response_
### Description
Job migration should not require the hf-token secret:
https://github.com/huggingface/datasets-server/blob/main/chart/templates/jobs/mongodb-migration/job.yaml#L23
https://github.com/huggingface/datasets-server/blob/main/chart/templates/_envCommon.tpl#L9 | closed | 2023-03-28T18:29:04Z | 2023-03-31T21:06:04Z | 2023-03-31T15:37:21Z | rtrompier |
1,643,902,063 | Remove assignee from Dataset Viewer Issue template | No need to be assigned automatically to @severo. | Remove assignee from Dataset Viewer Issue template: No need to be assigned automatically to @severo. | closed | 2023-03-28T13:00:15Z | 2023-03-31T13:51:42Z | 2023-03-31T13:48:53Z | albertvillanova |
1,643,800,561 | Dataset Viewer issue for knkarthick/dialogsum_reformat | ### Link
https://huggingface.co/datasets/knkarthick/dialogsum_reformat
### Description
The dataset viewer is not working for dataset knkarthick/dialogsum_reformat.
Error details:
```
Error code: ResponseNotFound
```
| Dataset Viewer issue for knkarthick/dialogsum_reformat: ### Link
https://huggingface.co/datasets/knkarthick/dialogsum_reformat
### Description
The dataset viewer is not working for dataset knkarthick/dialogsum_reformat.
Error details:
```
Error code: ResponseNotFound
```
| closed | 2023-03-28T12:01:25Z | 2023-03-28T13:35:31Z | 2023-03-28T13:35:31Z | knkarthick |
1,643,588,447 | Dataset Viewer issue for wikipedia | ### Link
https://huggingface.co/datasets/wikipedia
### Description
The dataset viewer is not working for dataset wikipedia.
Error details:
```
Error code: ResponseNotFound
```
| Dataset Viewer issue for wikipedia: ### Link
https://huggingface.co/datasets/wikipedia
### Description
The dataset viewer is not working for dataset wikipedia.
Error details:
```
Error code: ResponseNotFound
```
| closed | 2023-03-28T09:51:03Z | 2023-05-11T12:42:30Z | 2023-05-11T12:41:24Z | Imacder |
1,642,418,622 | Support splits with dots | The split regex is defined at https://github.com/huggingface/datasets/blob/4db8e33eb9cf6cd4453cdfa246c065e0eedf170c/src/datasets/naming.py#L28; I updated the one used to parse parquet file paths to use it | Support splits with dots: The split regex is defined at https://github.com/huggingface/datasets/blob/4db8e33eb9cf6cd4453cdfa246c065e0eedf170c/src/datasets/naming.py#L28; I updated the one used to parse parquet file paths to use it | closed | 2023-03-27T16:24:14Z | 2023-03-27T18:03:12Z | 2023-03-27T18:00:12Z | lhoestq |
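An illustrative check of the idea in the entry above, assuming the split-name pattern from the linked datasets file (word characters, optionally separated by dots) and the usual shard filename layout; treat the exact regexes as assumptions:

```python
import re

SPLIT_RE = r"\w+(?:\.\w+)*"  # e.g. "train", "test.clean", "validation.other"
PARQUET_FILE_RE = re.compile(rf"^(?P<split>{SPLIT_RE})-(?P<shard>\d{{5}})-of-(?P<total>\d{{5}})\.parquet$")

m = PARQUET_FILE_RE.match("test.clean-00000-of-00002.parquet")
assert m is not None and m.group("split") == "test.clean"
```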
1,641,948,479 | Dataset Viewer issue for zeusfsx/ukrainian-news | ### Link
https://huggingface.co/datasets/zeusfsx/ukrainian-news
### Description
The dataset viewer is not working for dataset zeusfsx/ukrainian-news.
Error details:
```
Error code: JobRunnerCrashedError
```
| Dataset Viewer issue for zeusfsx/ukrainian-news: ### Link
https://huggingface.co/datasets/zeusfsx/ukrainian-news
### Description
The dataset viewer is not working for dataset zeusfsx/ukrainian-news.
Error details:
```
Error code: JobRunnerCrashedError
```
| closed | 2023-03-27T11:35:01Z | 2023-05-11T12:37:35Z | 2023-05-04T15:04:15Z | OleksandrKorovii |
1,640,142,552 | Split first rows from parquet new Job Runner | Final part of https://github.com/huggingface/datasets-server/issues/755
Based on https://github.com/huggingface/datasets-server/pull/875 for parquet reading logic
| Split first rows from parquet new Job Runner: Final part of https://github.com/huggingface/datasets-server/issues/755
Based on https://github.com/huggingface/datasets-server/pull/875 for parquet reading logic
| closed | 2023-03-24T22:40:00Z | 2023-03-31T14:00:16Z | 2023-03-31T13:57:26Z | AndreaFrancis |
1,639,911,363 | [docs] Process Parquet files | This PR adds a guide for how to process Parquet files:
- Eager dataframes with `pd/pl.read_parquet`
- Lazy dataframes with `pl.scan_parquet`
- Read and query with DuckDB | [docs] Process Parquet files: This PR adds a guide for how to process Parquet files:
- Eager dataframes with `pd/pl.read_parquet`
- Lazy dataframes with `pl.scan_parquet`
- Read and query with DuckDB | closed | 2023-03-24T19:15:26Z | 2023-04-11T16:59:55Z | 2023-04-11T16:57:08Z | stevhliu |
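To make the scope of the guide concrete, here is a rough sketch of the workflow it documents. The URL, local file name, and column names are placeholders based on the amazon_polarity schema; they are not the guide's exact snippets.

```python
import urllib.request

import duckdb
import polars as pl

# Hypothetical parquet shard exposed by the /parquet endpoint (placeholder URL).
url = "https://huggingface.co/datasets/amazon_polarity/resolve/refs%2Fconvert%2Fparquet/amazon_polarity/train/0000.parquet"
local_path, _ = urllib.request.urlretrieve(url, "train-0000.parquet")

# Lazy dataframe: build a query plan and materialize only the needed columns/rows.
top = (
    pl.scan_parquet(local_path)
    .filter(pl.col("label") == 1)
    .select(["title", "content"])
    .limit(5)
    .collect()
)
print(top)

# DuckDB: read and query the same file with SQL.
print(duckdb.sql(f"SELECT label, count(*) AS n FROM '{local_path}' GROUP BY label").df())
```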
1,639,839,163 | feat: 🎸 remove bigcode/the-stack from the blocked datasets | because the blocked datasets are taken into account before the supported datasets, it was still ignored. | feat: 🎸 remove bigcode/the-stack from the blocked datasets: because the blocked datasets are taken into account before the supported datasets, it was still ignored. | closed | 2023-03-24T18:13:25Z | 2023-03-24T18:17:31Z | 2023-03-24T18:13:57Z | severo |
1,639,778,636 | Config-level parquet-and-dataset-info | Will solve https://github.com/huggingface/datasets-server/issues/866
Part of https://github.com/huggingface/datasets-server/issues/735
# Questions:
* Did I miss anything?
* Should there be any migration jobs? Technically it's a new job runner, it's not equal to the previous dataset-level one (results are not identical), and the old one doesn't exist anymore.
* Can I rename the relevant environment variables (like `PARQUET_AND_DATASET_INFO_BLOCKED_DATASETS` -> `PARQUET_AND_INFO_BLOCKED_DATASETS`), or might it break anything?
| Config-level parquet-and-dataset-info: Will solve https://github.com/huggingface/datasets-server/issues/866
Part of https://github.com/huggingface/datasets-server/issues/735
# Questions:
* Did I miss anything?
* Should there be any migration jobs? Technically it's a new job runner, it's not equal to the previous dataset-level one (results are not identical), and the old one doesn't exist anymore.
* Can I rename the relevant environment variables (like `PARQUET_AND_DATASET_INFO_BLOCKED_DATASETS` -> `PARQUET_AND_INFO_BLOCKED_DATASETS`), or might it break anything?
| closed | 2023-03-24T17:25:13Z | 2023-04-11T14:38:29Z | 2023-04-11T14:35:14Z | polinaeterna |
1,639,558,447 | Make config-level `/split-names-from-dataset-info` dependent on `config-info` instead of `dataset-info` | See https://github.com/huggingface/datasets-server/issues/864#issuecomment-1446865476 | Make config-level `/split-names-from-dataset-info` dependent on `config-info` instead of `dataset-info`: See https://github.com/huggingface/datasets-server/issues/864#issuecomment-1446865476 | closed | 2023-03-24T15:01:15Z | 2023-03-24T17:39:09Z | 2023-03-24T17:35:48Z | polinaeterna |
1,639,026,071 | feat: 🎸 add bigcode/the-stack to the supported datasets | It's an attempt to get the parquet files for this big dataset, let's see if it crashes or if it works. Asked in
https://huggingface.co/datasets/bigcode/the-stack/discussions/10.
cc @julien-c | feat: 🎸 add bigcode/the-stack to the supported datasets: It's an attempt to get the parquet files for this big dataset, let's see if it crashes or if it works. Asked in
https://huggingface.co/datasets/bigcode/the-stack/discussions/10.
cc @julien-c | closed | 2023-03-24T09:25:58Z | 2023-03-24T16:53:34Z | 2023-03-24T16:50:25Z | severo |
1,638,176,779 | Get config names from /config-names instead of /parquet-and-dataset-info for dataset-level size and parquet | Get config names from `/config-names` cache instead of `/parquet-and-dataset-info` in `dataset-size` and `dataset-parquet` (aligned with `dataset-info` https://github.com/huggingface/datasets-server/pull/962, discussed [here](https://github.com/huggingface/datasets-server/pull/962#discussion_r1146102010))
\+ move files to the new directory structure (config-level to `config/`, dataset-level to `dataset/`) | Get config names from /config-names instead of /parquet-and-dataset-info for dataset-level size and parquet: Get config names from `/config-names` cache instead of `/parquet-and-dataset-info` in `dataset-size` and `dataset-parquet` (aligned with `dataset-info` https://github.com/huggingface/datasets-server/pull/962, discussed [here](https://github.com/huggingface/datasets-server/pull/962#discussion_r1146102010))
\+ move files to the new directory structure (config-level to `config/`, dataset-level to `dataset/`) | closed | 2023-03-23T19:33:51Z | 2023-03-24T10:46:58Z | 2023-03-24T10:43:58Z | polinaeterna |
1,637,860,677 | Access docker host | - give the docker services access to the host network. It is required to be able to access a local port on the same machine,
for example.
- run 4 workers in parallel, which helps process the jobs more quickly | Access docker host: - give the docker services access to the host network. It is required to be able to access a local port on the same machine,
for example.
- run 4 workers in parallel, which helps process the jobs more quickly | closed | 2023-03-23T16:22:13Z | 2023-03-24T18:01:22Z | 2023-03-24T17:58:03Z | severo |
1,637,683,730 | Parquet files should be deleted if a dataset goes above the limit | In step /parquet-and-dataset-info, we fail fast if the dataset is too big. But we miss a corner case: if the dataset was under the limit, but is later updated and ends with a size above the limit. In that case, it already contains parquet files in the `refs/convert/parquet` branch. When the dataset is updated (webhook received, jobs launched), an error is stored in the cache. Still, the parquet files on the Hub remain unchanged, corresponding to the last working version, which means they might now be unsynchronized with the `main` branch.
| Parquet files should be deleted if a dataset goes above the limit: In step /parquet-and-dataset-info, we fail fast if the dataset is too big. But we miss a corner case: if the dataset was under the limit, but is later updated and ends with a size above the limit. In that case, it already contains parquet files in the `refs/convert/parquet` branch. When the dataset is updated (webhook received, jobs launched), an error is stored in the cache. Still, the parquet files on the Hub remain unchanged, corresponding to the last working version, which means they might now be unsynchronized with the `main` branch.
| closed | 2023-03-23T14:42:41Z | 2024-02-02T17:05:49Z | 2024-02-02T17:05:49Z | severo |
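A possible shape for the fix described in this issue, sketched with huggingface_hub; the branch name comes from the issue text, while the function itself is hypothetical. In practice the deletions would likely be batched into a single commit with `create_commit` rather than one commit per file.

```python
from huggingface_hub import HfApi

PARQUET_REVISION = "refs/convert/parquet"

def delete_obsolete_parquet_files(repo_id: str, token: str) -> None:
    """If the dataset is now above the size limit, remove the stale parquet files
    so the branch does not stay unsynchronized with `main`."""
    api = HfApi(token=token)
    files = api.list_repo_files(repo_id=repo_id, repo_type="dataset", revision=PARQUET_REVISION)
    for path in files:
        if path.endswith(".parquet"):
            api.delete_file(
                path_in_repo=path,
                repo_id=repo_id,
                repo_type="dataset",
                revision=PARQUET_REVISION,
                commit_message="Delete outdated parquet files (dataset is above the size limit)",
            )
```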
1,637,388,234 | Dataset Viewer issue for allenai/objaverse | ### Link
https://huggingface.co/datasets/allenai/objaverse
### Description
This error should be "tagged" as a server error, not a "dataset" error, and should propose to create an issue, not to open a discussion.
Also, note that we would want to retry automatically later in that case.
```
Error code: FeaturesError
Exception: HfHubHTTPError
Message: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/datasets/allenai/objaverse
Traceback: Traceback (most recent call last):
File "/src/workers/datasets_based/src/datasets_based/workers/first_rows.py", line 465, in compute_first_rows_response
iterable_dataset = load_dataset(
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 1734, in load_dataset
builder_instance = load_dataset_builder(
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 1492, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 1216, in dataset_module_factory
raise e1 from None
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 1185, in dataset_module_factory
raise e
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 1157, in dataset_module_factory
dataset_info = hf_api_dataset_info(
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/utils/_hf_hub_fixes.py", line 152, in dataset_info
return hf_api.dataset_info(repo_id, revision=revision, timeout=timeout, use_auth_token=use_auth_token)
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/huggingface_hub/hf_api.py", line 1299, in dataset_info
hf_raise_for_status(r)
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py", line 280, in hf_raise_for_status
raise HfHubHTTPError(str(e), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/datasets/allenai/objaverse
```
<img width="1021" alt="Capture d’écran 2023-03-23 à 12 50 13" src="https://user-images.githubusercontent.com/1676121/227195409-597b3967-8488-46f0-bbd2-21b0f6e28a7d.png">
| Dataset Viewer issue for allenai/objaverse: ### Link
https://huggingface.co/datasets/allenai/objaverse
### Description
This error should be "tagged" as a server error, not a "dataset" error, and should propose to create an issue, not to open a discussion.
Also, note that we would want to retry automatically later in that case.
```
Error code: FeaturesError
Exception: HfHubHTTPError
Message: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/datasets/allenai/objaverse
Traceback: Traceback (most recent call last):
File "/src/workers/datasets_based/src/datasets_based/workers/first_rows.py", line 465, in compute_first_rows_response
iterable_dataset = load_dataset(
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 1734, in load_dataset
builder_instance = load_dataset_builder(
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 1492, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 1216, in dataset_module_factory
raise e1 from None
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 1185, in dataset_module_factory
raise e
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 1157, in dataset_module_factory
dataset_info = hf_api_dataset_info(
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/utils/_hf_hub_fixes.py", line 152, in dataset_info
return hf_api.dataset_info(repo_id, revision=revision, timeout=timeout, use_auth_token=use_auth_token)
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/huggingface_hub/hf_api.py", line 1299, in dataset_info
hf_raise_for_status(r)
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py", line 280, in hf_raise_for_status
raise HfHubHTTPError(str(e), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/datasets/allenai/objaverse
```
<img width="1021" alt="Capture d’écran 2023-03-23 à 12 50 13" src="https://user-images.githubusercontent.com/1676121/227195409-597b3967-8488-46f0-bbd2-21b0f6e28a7d.png">
| closed | 2023-03-23T11:52:39Z | 2023-05-02T15:04:16Z | 2023-05-02T15:04:16Z | severo |
1,636,646,796 | [docs] Pandas to Polars | Sorry for the wait! This PR updates the current code examples in the Parquet [docs](https://huggingface.co/docs/datasets-server/parquet) to use Polars instead of Pandas. It also switches out the `alexandriainst/danish-wit` with the `amazon_polarity` dataset because it returned an error saying conversion is limited to datasets under 5GB.
I'll follow this up with another PR for the new Parquet guide (querying/use in web apps with duckdb) 🙂 | [docs] Pandas to Polars: Sorry for the wait! This PR updates the current code examples in the Parquet [docs](https://huggingface.co/docs/datasets-server/parquet) to use Polars instead of Pandas. It also switches out the `alexandriainst/danish-wit` with the `amazon_polarity` dataset because it returned an error saying conversion is limited to datasets under 5GB.
I'll follow this up with another PR for the new Parquet guide (querying/use in web apps with duckdb) 🙂 | closed | 2023-03-22T23:53:58Z | 2023-03-29T08:00:02Z | 2023-03-28T17:07:51Z | stevhliu |
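Roughly, the kind of swap this PR makes in the docs examples; the URL is a placeholder and the real snippets live in the Parquet guide.

```python
import pandas as pd
import polars as pl

# Hypothetical parquet shard URL taken from the /parquet endpoint response.
url = "https://huggingface.co/datasets/amazon_polarity/resolve/refs%2Fconvert%2Fparquet/amazon_polarity/train/0000.parquet"

# Before: pandas, reading the file over HTTP.
df_pandas = pd.read_parquet(url)

# After: polars, same file, same URL.
df_polars = pl.read_parquet(url)
print(df_polars.head())
```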
1,636,446,573 | Kill long jobs | The executor checks if the current job is a long job (>20min) and kills it by sending a SIGTERM signal.
The queue and the cache are properly updated with errors.
close https://github.com/huggingface/datasets-server/issues/964 | Kill long jobs: The executor checks if the current job is a long job (>20min) and kills it by sending a SIGTERM signal.
The queue and the cache are properly updated with errors.
close https://github.com/huggingface/datasets-server/issues/964 | closed | 2023-03-22T20:23:44Z | 2023-03-23T13:19:51Z | 2023-03-23T13:16:31Z | lhoestq |
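A minimal sketch of the check described in this PR; the 20-minute threshold comes from the PR text, while the function and constant names are made up for the illustration. The real executor also takes care of marking the job as errored in the queue and the cache.

```python
import os
import signal
from datetime import datetime, timedelta, timezone

MAX_JOB_DURATION = timedelta(minutes=20)

def kill_if_too_long(job_pid: int, started_at: datetime) -> bool:
    """Send SIGTERM to the worker process if its current job exceeds the allowed duration."""
    if datetime.now(timezone.utc) - started_at > MAX_JOB_DURATION:
        os.kill(job_pid, signal.SIGTERM)
        return True
    return False
```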
1,636,030,867 | An incorrect error is returned for some datasets | See https://github.com/huggingface/datasets-server/issues/975 and https://github.com/huggingface/datasets-server/issues/968.
The error is `ResponseNotFound`.
This response should never be returned, as far as I remember; it should instead resolve to one of:
- dataset not found, if the dataset is not supported (private, gated without access, or a dataset that does not exist)
- or, cache is being refreshed, if the dataset is supported
| An incorrect error is returned for some datasets: See https://github.com/huggingface/datasets-server/issues/975 and https://github.com/huggingface/datasets-server/issues/968.
The error is `ResponseNotFound`.
This response should never be returned, as far as I remember; it should instead resolve to one of:
- dataset not found, if the dataset is not supported (private, gated without access, or a dataset that does not exist)
- or, cache is being refreshed, if the dataset is supported
| closed | 2023-03-22T15:39:23Z | 2023-05-02T09:06:08Z | 2023-04-30T15:04:03Z | severo |
1,634,223,326 | Dataset Viewer issue for bsmock/pubtables-1m | ### Link
https://huggingface.co/datasets/bsmock/pubtables-1m
### Description
The dataset viewer is not working for dataset bsmock/pubtables-1m.
Error details:
```
Error code: ResponseNotFound
```
| Dataset Viewer issue for bsmock/pubtables-1m: ### Link
https://huggingface.co/datasets/bsmock/pubtables-1m
### Description
The dataset viewer is not working for dataset bsmock/pubtables-1m.
Error details:
```
Error code: ResponseNotFound
```
| closed | 2023-03-21T15:58:01Z | 2023-03-22T15:26:52Z | 2023-03-22T15:25:29Z | nlaird |
1,634,206,360 | Disable cache refresh job on values and dev yaml files | null | Disable cache refresh job on values and dev yaml files: | closed | 2023-03-21T15:51:04Z | 2023-03-21T16:46:12Z | 2023-03-21T16:43:18Z | AndreaFrancis |
1,634,106,381 | Separate the computation of metrics (queue, cache) from the exposition | The metrics about the queue and the cache require some processing, and queries to the MongoDB. It can take some time to process, sometimes more than 10s
<img width="522" alt="Capture d’écran 2023-03-21 à 15 55 30" src="https://user-images.githubusercontent.com/1676121/226646241-acbd6422-e962-44d1-8ea1-2973e4c391ad.png">
We should instead create the metrics with a dedicated process on one side, and just read the last metrics on the other side when the /metrics endpoint is polled.
Currently, the computation of the metrics is done live here: https://github.com/huggingface/datasets-server/blob/5c0e59f79d4a1f2f09b0b2ddef02b0b30a85f226/services/admin/src/admin/prometheus.py#L83 | Separate the computation of metrics (queue, cache) from the exposition: The metrics about the queue and the cache require some processing, and queries to the MongoDB. It can take some time to process, sometimes more than 10s
<img width="522" alt="Capture d’écran 2023-03-21 à 15 55 30" src="https://user-images.githubusercontent.com/1676121/226646241-acbd6422-e962-44d1-8ea1-2973e4c391ad.png">
We should instead create the metrics with a dedicated process on one side, and just read the last metrics on the other side when the /metrics endpoint is polled.
Currently, the computation of the metrics is done live here: https://github.com/huggingface/datasets-server/blob/5c0e59f79d4a1f2f09b0b2ddef02b0b30a85f226/services/admin/src/admin/prometheus.py#L83 | closed | 2023-03-21T14:56:58Z | 2023-05-09T07:52:17Z | 2023-05-09T07:52:17Z | severo |
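One possible shape for the split this issue proposes, sketched with prometheus_client. The MongoDB aggregation is stubbed out and the scheduling mechanism is an assumption; this is not the project's actual implementation.

```python
import threading
import time
from typing import Dict

from prometheus_client import Gauge, generate_latest

QUEUE_JOBS_TOTAL = Gauge("queue_jobs_total", "Number of jobs in the queue", ["status"])

def count_jobs_by_status() -> Dict[str, int]:
    # Stub for the expensive MongoDB aggregation over the queue collection.
    return {"waiting": 0, "started": 0, "success": 0, "error": 0}

def compute_metrics_forever(interval_seconds: float = 60.0) -> None:
    # Dedicated loop: run the heavy queries periodically and store the results in gauges.
    while True:
        for status, count in count_jobs_by_status().items():
            QUEUE_JOBS_TOTAL.labels(status=status).set(count)
        time.sleep(interval_seconds)

def metrics_endpoint() -> bytes:
    # The /metrics handler only serializes the last computed values: no MongoDB query on the hot path.
    return generate_latest()

threading.Thread(target=compute_metrics_forever, daemon=True).start()
```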
1,634,073,205 | Reduce prod resources and remove cache refresh job (temporary change) | null | Reduce prod resources and remove cache refresh job (temporary change): | closed | 2023-03-21T14:39:41Z | 2023-03-21T15:08:56Z | 2023-03-21T15:05:43Z | AndreaFrancis |
1,633,900,748 | Cache Refresh - Remove hook annotations | null | Cache Refresh - Remove hook annotations: | closed | 2023-03-21T13:12:59Z | 2023-03-21T13:19:31Z | 2023-03-21T13:16:15Z | AndreaFrancis |
1,633,806,616 | Increase resources for 'all' worker | null | Increase resources for 'all' worker: | closed | 2023-03-21T12:23:25Z | 2023-03-21T12:46:30Z | 2023-03-21T12:43:33Z | AndreaFrancis |
1,633,654,253 | Reduce the number of redundant calls to the Hub in job runners | We could save a lot of requests to the Hub (done through huggingface_hub) by memoïzing the results (or using a client that resp.
See an extract of the logs for /first-rows on glue:
```
DEBUG: 2023-03-21 10:49:34,444 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:36,261 - urllib3.connectionpool - https://huggingface.co:443 "GET /api/datasets/severo/glue HTTP/1.1" 200 4444
DEBUG: 2023-03-21 10:49:36,263 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:38,080 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/glue.py HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:49:38,083 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:39,903 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/dataset_infos.json HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:49:39,906 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:41,716 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/README.md HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:49:41,771 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:43,599 - urllib3.connectionpool - https://huggingface.co:443 "GET /api/datasets/severo/glue HTTP/1.1" 200 4444
DEBUG: 2023-03-21 10:49:43,602 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:45,417 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/glue.py HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:49:45,420 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:47,237 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/dataset_infos.json HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:49:47,240 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:49,049 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/README.md HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:49:49,104 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:50,932 - urllib3.connectionpool - https://huggingface.co:443 "GET /api/datasets/severo/glue HTTP/1.1" 200 4444
DEBUG: 2023-03-21 10:49:50,934 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:52,750 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/glue.py HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:49:52,754 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:54,585 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/dataset_infos.json HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:49:54,588 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:56,401 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/README.md HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:49:56,456 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:58,285 - urllib3.connectionpool - https://huggingface.co:443 "GET /api/datasets/severo/glue HTTP/1.1" 200 4444
DEBUG: 2023-03-21 10:49:58,287 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:00,109 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/glue.py HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:50:00,112 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:01,932 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/dataset_infos.json HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:50:01,935 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:03,744 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/README.md HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:50:03,800 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:05,620 - urllib3.connectionpool - https://huggingface.co:443 "GET /api/datasets/severo/glue HTTP/1.1" 200 4444
DEBUG: 2023-03-21 10:50:05,623 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:07,440 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/glue.py HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:50:07,443 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:09,259 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/dataset_infos.json HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:50:09,262 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:11,079 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/README.md HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:50:11,134 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:12,957 - urllib3.connectionpool - https://huggingface.co:443 "GET /api/datasets/severo/glue HTTP/1.1" 200 4444
DEBUG: 2023-03-21 10:50:12,960 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:14,783 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/glue.py HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:50:14,786 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:16,607 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/dataset_infos.json HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:50:16,611 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:18,436 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/README.md HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:50:18,491 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:20,314 - urllib3.connectionpool - https://huggingface.co:443 "GET /api/datasets/severo/glue HTTP/1.1" 200 4444
DEBUG: 2023-03-21 10:50:20,316 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:22,123 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/glue.py HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:50:22,126 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:23,941 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/dataset_infos.json HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:50:23,945 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:25,770 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/README.md HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:50:25,825 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:27,640 - urllib3.connectionpool - https://huggingface.co:443 "GET /api/datasets/severo/glue HTTP/1.1" 200 4444
DEBUG: 2023-03-21 10:50:27,642 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
```
It makes no sense to fetch the same resources again and again. | Reduce the number of redundant calls to the Hub in job runners: We could save a lot of requests to the Hub (done through huggingface_hub) by memoïzing the results (or using a client that resp.
See an extract of the logs for /first-rows on glue:
```
DEBUG: 2023-03-21 10:49:34,444 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:36,261 - urllib3.connectionpool - https://huggingface.co:443 "GET /api/datasets/severo/glue HTTP/1.1" 200 4444
DEBUG: 2023-03-21 10:49:36,263 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:38,080 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/glue.py HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:49:38,083 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:39,903 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/dataset_infos.json HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:49:39,906 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:41,716 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/README.md HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:49:41,771 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:43,599 - urllib3.connectionpool - https://huggingface.co:443 "GET /api/datasets/severo/glue HTTP/1.1" 200 4444
DEBUG: 2023-03-21 10:49:43,602 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:45,417 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/glue.py HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:49:45,420 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:47,237 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/dataset_infos.json HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:49:47,240 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:49,049 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/README.md HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:49:49,104 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:50,932 - urllib3.connectionpool - https://huggingface.co:443 "GET /api/datasets/severo/glue HTTP/1.1" 200 4444
DEBUG: 2023-03-21 10:49:50,934 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:52,750 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/glue.py HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:49:52,754 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:54,585 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/dataset_infos.json HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:49:54,588 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:56,401 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/README.md HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:49:56,456 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:49:58,285 - urllib3.connectionpool - https://huggingface.co:443 "GET /api/datasets/severo/glue HTTP/1.1" 200 4444
DEBUG: 2023-03-21 10:49:58,287 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:00,109 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/glue.py HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:50:00,112 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:01,932 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/dataset_infos.json HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:50:01,935 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:03,744 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/README.md HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:50:03,800 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:05,620 - urllib3.connectionpool - https://huggingface.co:443 "GET /api/datasets/severo/glue HTTP/1.1" 200 4444
DEBUG: 2023-03-21 10:50:05,623 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:07,440 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/glue.py HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:50:07,443 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:09,259 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/dataset_infos.json HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:50:09,262 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:11,079 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/README.md HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:50:11,134 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:12,957 - urllib3.connectionpool - https://huggingface.co:443 "GET /api/datasets/severo/glue HTTP/1.1" 200 4444
DEBUG: 2023-03-21 10:50:12,960 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:14,783 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/glue.py HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:50:14,786 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:16,607 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/dataset_infos.json HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:50:16,611 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:18,436 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/README.md HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:50:18,491 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:20,314 - urllib3.connectionpool - https://huggingface.co:443 "GET /api/datasets/severo/glue HTTP/1.1" 200 4444
DEBUG: 2023-03-21 10:50:20,316 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:22,123 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/glue.py HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:50:22,126 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:23,941 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/dataset_infos.json HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:50:23,945 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:25,770 - urllib3.connectionpool - https://huggingface.co:443 "HEAD /datasets/severo/glue/resolve/main/README.md HTTP/1.1" 200 0
DEBUG: 2023-03-21 10:50:25,825 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
DEBUG: 2023-03-21 10:50:27,640 - urllib3.connectionpool - https://huggingface.co:443 "GET /api/datasets/severo/glue HTTP/1.1" 200 4444
DEBUG: 2023-03-21 10:50:27,642 - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
```
It makes no sense to fetch the same resources again and again. | closed | 2023-03-21T10:54:39Z | 2023-07-12T15:04:32Z | 2023-07-12T15:04:32Z | severo |
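A minimal illustration of the memoization idea with functools.lru_cache; this is not the project's implementation, and in a job runner the cache should be scoped to a single job run so it never serves stale metadata across dataset updates.

```python
from functools import lru_cache
from typing import Optional

from huggingface_hub import DatasetInfo, HfApi

_api = HfApi()

@lru_cache(maxsize=128)
def cached_dataset_info(dataset: str, revision: Optional[str] = None) -> DatasetInfo:
    # One GET /api/datasets/<dataset> per (dataset, revision) for the lifetime of the cache,
    # instead of one request per call site.
    return _api.dataset_info(dataset, revision=revision)
```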
1,633,576,324 | Dataset Viewer issue for RUCAIBox/Open-Dialogue | ### Link
https://huggingface.co/datasets/RUCAIBox/Open-Dialogue
### Description
The dataset viewer is not working for dataset RUCAIBox/Open-Dialogue.
Error details:
```
Error code: ResponseNotFound
```
| Dataset Viewer issue for RUCAIBox/Open-Dialogue: ### Link
https://huggingface.co/datasets/RUCAIBox/Open-Dialogue
### Description
The dataset viewer is not working for dataset RUCAIBox/Open-Dialogue.
Error details:
```
Error code: ResponseNotFound
```
| closed | 2023-03-21T10:05:24Z | 2023-03-22T15:10:44Z | 2023-03-22T15:10:43Z | Open-ChatGPT |
1,632,887,550 | /first-rows to first-rows-from-streaming | Part of https://github.com/huggingface/datasets-server/issues/755
> Implement first-rows-from-streaming job runner at config level (it already exists as /first-rows)
The `/first-rows` processing step will be renamed to `first-rows-from-streaming`, and the new source of split validation will be the `/split-names-from-streaming` result instead of using the library.
| /first-rows to first-rows-from-streaming: Part of https://github.com/huggingface/datasets-server/issues/755
> Implement first-rows-from-streaming job runner at config level (it already exists as /first-rows)
The `/first-rows` processing step will be renamed to `first-rows-from-streaming`, and the new source of split validation will be the `/split-names-from-streaming` result instead of using the library.
| closed | 2023-03-20T21:25:19Z | 2023-03-23T12:04:13Z | 2023-03-23T12:00:53Z | AndreaFrancis |