id | title | body | description | state | created_at | updated_at | closed_at | user
---|---|---|---|---|---|---|---|---|
1,845,990,087 |
full_scan is a boolean; it should not be assigned None
|
https://github.com/huggingface/datasets-server/blob/4a70eba13cc7c17be613aad88450c713c51c059f/services/worker/src/worker/job_runners/split/opt_in_out_urls_scan_from_streaming.py#L193
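A minimal sketch of the kind of fix implied here (the names below are illustrative, not the actual job runner's types): type `full_scan` as a plain `bool` and derive it from the scan outcome, so `None` can never be assigned.
```python
from dataclasses import dataclass


@dataclass
class OptInOutUrlsScanResult:
    # illustrative result container, not the real job runner's response type
    num_scanned_rows: int
    full_scan: bool  # always a bool, never None


def build_result(num_scanned_rows: int, num_rows_total: int) -> OptInOutUrlsScanResult:
    # True only when every row was scanned; otherwise False (not None)
    return OptInOutUrlsScanResult(
        num_scanned_rows=num_scanned_rows,
        full_scan=num_scanned_rows >= num_rows_total,
    )


print(build_result(num_scanned_rows=100, num_rows_total=1000).full_scan)  # False
```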
|
full_scan is a boolean; it should not be assigned None: https://github.com/huggingface/datasets-server/blob/4a70eba13cc7c17be613aad88450c713c51c059f/services/worker/src/worker/job_runners/split/opt_in_out_urls_scan_from_streaming.py#L193
|
open
|
2023-08-10T22:49:18Z
|
2023-08-10T22:49:31Z
| null |
severo
|
1,845,881,472 |
The parameters of an endpoint should not change the response format
|
The optional parameters should only change the response's content, not its structure.
For example, the `length` parameter in /rows reduces the number of returned rows.
But for /parquet, for example, if we ask for the config level (https://datasets-server.huggingface.co/parquet?dataset=mnist&config=mnist), we get the list of features along with the list of files, while we don't have features when we only ask for the dataset level (https://datasets-server.huggingface.co/parquet?dataset=mnist). Also, for /info, the structure of `dataset_info` is not the same for dataset level and config level.
For /size, the fields' names and types change depending on whether the config parameter is passed or not. For example, https://datasets-server.huggingface.co/size?dataset=mnist gives `.size.configs`, while https://datasets-server.huggingface.co/size?dataset=mnist&config=mnist gives `.size.config`.
Similarly, the `failed` and `pending` entries are weird. They only show for "aggregated" levels (i.e., dataset when the response is generated at config level; dataset and config when it is generated at split level). Currently:
- /splits, dataset level
- /parquet, dataset level
- /info, dataset level
- /size, dataset level
- /opt-in-out-urls, dataset and config levels
About "failed" and "pending", also note that their type differs depending on the endpoint. Just one example: "failed" in /splits returns the error, while "failed" in /parquet returns the parameters of the previous job.
Also, in /parquet and /info, instead of omitting "split", we set it to None (which gives `null` in JSON instead of omitting the field).
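A quick way to observe the /size inconsistency described above, as a hedged sketch using `requests` (the field names come from the examples above; error handling is omitted):
```python
import requests

SIZE_ENDPOINT = "https://datasets-server.huggingface.co/size"

dataset_level = requests.get(SIZE_ENDPOINT, params={"dataset": "mnist"}, timeout=30).json()
config_level = requests.get(
    SIZE_ENDPOINT, params={"dataset": "mnist", "config": "mnist"}, timeout=30
).json()

# Dataset level is expected to expose a list under `.size.configs` ...
print("configs" in dataset_level.get("size", {}))
# ... while config level is expected to expose a single object under `.size.config`.
print("config" in config_level.get("size", {}))
```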
|
The parameters of an endpoint should not change the response format: The optional parameters should only change the response's content, not its structure.
For example, the `length` parameter in /rows reduces the number of returned rows.
But for /parquet, for example, if we ask for the config level (https://datasets-server.huggingface.co/parquet?dataset=mnist&config=mnist), we get the list of features along with the list of files, while we don't have features when we only ask for the dataset level (https://datasets-server.huggingface.co/parquet?dataset=mnist). Also, for /info, the structure of `dataset_info` is not the same for dataset level and config level.
For /size, the fields' names and types change depending on whether the config parameter is passed or not. For example, https://datasets-server.huggingface.co/size?dataset=mnist gives `.size.configs`, while https://datasets-server.huggingface.co/size?dataset=mnist&config=mnist gives `.size.config`.
Similarly, the `failed` and `pending` entries are weird. They only show for "aggregated" levels (i.e., dataset when the response is generated at config level; dataset and config when it is generated at split level). Currently:
- /splits, dataset level
- /parquet, dataset level
- /info, dataset level
- /size, dataset level
- /opt-in-out-urls, dataset and config levels
About "failed" and "pending", also note that their type differs depending on the endpoint. Just one example: "failed" in /splits returns the error, while "failed" in /parquet returns the parameters of the previous job.
Also, in /parquet and /info, instead of omitting "split", we set it to None (which gives `null` in JSON instead of omitting the field).
|
open
|
2023-08-10T20:49:05Z
|
2023-11-10T15:10:08Z
| null |
severo
|
1,845,839,070 |
Add a section for the missing endpoints in the doc
|
Missing documentation:
- [x] /size (dataset, config)
- [x] /info (dataset, config)
- [x] /statistics (split)
- [x] /search (split) - see #1663
- [ ] /opt-in-out-urls (dataset, config, split)
|
Add a section for the missing endpoints in the doc: Missing documentation:
- [x] /size (dataset, config)
- [x] /info (dataset, config)
- [x] /statistics (split)
- [x] /search (split) - see #1663
- [ ] /opt-in-out-urls (dataset, config, split)
|
open
|
2023-08-10T20:19:54Z
|
2024-02-06T14:57:33Z
| null |
severo
|
1,845,501,120 |
Add a section for /search in the docs
|
As the endpoint is public, we should have a section in https://huggingface.co/docs/datasets-server.
For https://huggingface.co/docs/hub/datasets-viewer, let's wait to have the search integrated into the Hub dataset viewer.
|
Add a section for /search in the docs: As the endpoint is public, we should have a section in https://huggingface.co/docs/datasets-server.
For https://huggingface.co/docs/hub/datasets-viewer, let's wait to have the search integrated into the Hub dataset viewer.
|
closed
|
2023-08-10T16:16:40Z
|
2023-09-22T11:22:04Z
|
2023-09-11T15:44:58Z
|
severo
|
1,845,472,246 |
Should we change 500 to another status code when the error comes from the dataset?
|
See #1661 for example.
Same for the "retry later" error: is 500 the most appropriate status code?
|
Should we change 500 to another status code when the error comes from the dataset?: See #1661 for example.
Same for the "retry later" error: is 500 the most appropriate status code?
|
open
|
2023-08-10T15:57:03Z
|
2023-08-14T15:36:27Z
| null |
severo
|
1,845,452,736 |
rows returns 404 instead of 500 on dataset error
|
For example, https://datasets-server.huggingface.co/rows?dataset=atomic&config=atomic&split=train returns 404, Not found. It should instead return a detailed error that helps the user debug, as is done for all the cached responses. /rows is special, as it's created on the fly, but it should stick with the same logic and copy the previous step's error.
It should return:
```
500
{
"error": "Couldn't get the size of external files in `_split_generators` because a request failed:\n404 Client Error: Not Found for url: https://maartensap.com/atomic/data/atomic_data.tgz\nPlease consider moving your data files in this dataset repository instead (e.g. inside a data/ folder).",
"cause_exception": "HTTPError",
"cause_message": "404 Client Error: Not Found for url: https://maartensap.com/atomic/data/atomic_data.tgz",
"cause_traceback": [
"Traceback (most recent call last):\n",
" File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 506, in raise_if_too_big_from_external_data_files\n for i, size in enumerate(pool.imap_unordered(get_size, ext_data_files)):\n",
" File \"/usr/local/lib/python3.9/multiprocessing/pool.py\", line 870, in next\n raise value\n",
" File \"/usr/local/lib/python3.9/multiprocessing/pool.py\", line 125, in worker\n result = (True, func(*args, **kwds))\n",
" File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 402, in _request_size\n response.raise_for_status()\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/requests/models.py\", line 1021, in raise_for_status\n raise HTTPError(http_error_msg, response=self)\n",
"requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://maartensap.com/atomic/data/atomic_data.tgz\n"
],
"copied_from_artifact": {
"kind": "config-parquet-metadata",
"dataset": "atomic",
"config": "atomic",
"split": null
}
}
```
|
rows returns 404 instead of 500 on dataset error: For example, https://datasets-server.huggingface.co/rows?dataset=atomic&config=atomic&split=train returns 404, Not found. It should instead return a detailed error that helps the user debug, as is done for all the cached responses. /rows is special, as it's created on the fly, but it should stick with the same logic and copy the previous step's error.
It should return:
```
500
{
"error": "Couldn't get the size of external files in `_split_generators` because a request failed:\n404 Client Error: Not Found for url: https://maartensap.com/atomic/data/atomic_data.tgz\nPlease consider moving your data files in this dataset repository instead (e.g. inside a data/ folder).",
"cause_exception": "HTTPError",
"cause_message": "404 Client Error: Not Found for url: https://maartensap.com/atomic/data/atomic_data.tgz",
"cause_traceback": [
"Traceback (most recent call last):\n",
" File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 506, in raise_if_too_big_from_external_data_files\n for i, size in enumerate(pool.imap_unordered(get_size, ext_data_files)):\n",
" File \"/usr/local/lib/python3.9/multiprocessing/pool.py\", line 870, in next\n raise value\n",
" File \"/usr/local/lib/python3.9/multiprocessing/pool.py\", line 125, in worker\n result = (True, func(*args, **kwds))\n",
" File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 402, in _request_size\n response.raise_for_status()\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/requests/models.py\", line 1021, in raise_for_status\n raise HTTPError(http_error_msg, response=self)\n",
"requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://maartensap.com/atomic/data/atomic_data.tgz\n"
],
"copied_from_artifact": {
"kind": "config-parquet-metadata",
"dataset": "atomic",
"config": "atomic",
"split": null
}
}
```
|
closed
|
2023-08-10T15:45:20Z
|
2023-09-04T14:26:39Z
|
2023-09-04T14:26:39Z
|
severo
|
1,844,760,755 |
Revert datasets authentication with DownloadConfig
|
Once we update `datasets` to version 2.14.4, we no longer need the authentication tweaks (where we had to use `download_config` instead of `token`) introduced by:
- #1620
Fix #1659.
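A rough sketch of the tweak being reverted, with a placeholder repo id and token (the real call sites live in the worker job runners):
```python
from datasets import DownloadConfig, load_dataset

HF_TOKEN = "hf_xxx"  # placeholder
REPO_ID = "some-org/some-gated-dataset"  # placeholder

# Workaround needed before datasets 2.14.4 (see #1620): wrap the token in a DownloadConfig
ds = load_dataset(REPO_ID, download_config=DownloadConfig(token=HF_TOKEN))

# With datasets 2.14.4, passing the token directly works again
ds = load_dataset(REPO_ID, token=HF_TOKEN)
```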
|
Revert datasets authentication with DownloadConfig: Once we update `datasets` to version 2.14.4, we no longer need the authentication tweaks (where we had to use `download_config` instead of `token`) introduced by:
- #1620
Fix #1659.
|
closed
|
2023-08-10T09:17:27Z
|
2023-08-10T14:51:06Z
|
2023-08-10T14:51:05Z
|
albertvillanova
|
1,844,750,240 |
Revert datasets authentication tweaks
|
Once we update `datasets` to version 2.14.4, we no longer need the authentication tweaks (where we had to use `download_config` instead of `token`) introduced by:
- #1620
|
Revert datasets authentication tweaks: Once we update `datasets` to version 2.14.4, we no longer need the authentication tweaks (where we had to use `download_config` instead of `token`) introduced by:
- #1620
|
closed
|
2023-08-10T09:11:13Z
|
2023-08-10T14:51:07Z
|
2023-08-10T14:51:06Z
|
albertvillanova
|
1,844,097,492 |
Incremental cache metrics
|
Currently, we calculate cache metrics in a job that runs every X minutes, but this job queries the full cache collection every time, leading to Mongo query-targeting issues in MongoDB Atlas (ratio > 1000).
Alert message:
`The ratio of documents scanned to returned exceeded 1000.0 on datasets-server-prod-shard-00-00.ujrd0.mongodb.net, which typically suggests that un-indexed queries are being run.`
In order to avoid this alert, this PR moves the metrics increment to simple_cache so that, every time a cache entry is recorded, the metrics collection is updated as well.
Note: job metrics will be added in another PR
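A minimal sketch of the upsert-and-increment pattern this PR describes, assuming a mongoengine metrics document; the class and field names are hypothetical, only the pattern matters:
```python
from typing import Optional

from mongoengine import Document, IntField, StringField


class CacheTotalMetricDocument(Document):
    # hypothetical metrics document: one counter per (kind, http_status, error_code)
    kind = StringField(required=True)
    http_status = IntField(required=True)
    error_code = StringField()
    total = IntField(default=0)


def increase_metric(kind: str, http_status: int, error_code: Optional[str] = None) -> None:
    # upsert the counter when a cache entry is recorded, instead of scanning
    # the whole cache collection in a periodic job
    CacheTotalMetricDocument.objects(
        kind=kind, http_status=http_status, error_code=error_code
    ).update_one(upsert=True, inc__total=1)
```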
|
Incremental cache metrics: Currently, we calculate cache metrics in a job that runs every X minutes, but this job queries the full cache collection every time, leading to Mongo query-targeting issues in MongoDB Atlas (ratio > 1000).
Alert message:
`The ratio of documents scanned to returned exceeded 1000.0 on datasets-server-prod-shard-00-00.ujrd0.mongodb.net, which typically suggests that un-indexed queries are being run.`
In order to avoid this alert, this PR moves the metrics increment to simple_cache so that, every time a cache entry is recorded, the metrics collection is updated as well.
Note: job metrics will be added in another PR
|
closed
|
2023-08-09T22:10:37Z
|
2023-08-11T16:59:05Z
|
2023-08-11T16:59:04Z
|
AndreaFrancis
|
1,843,925,658 |
Private and inexistent datasets should return 404, not 401
|
Try
https://datasets-server.huggingface.co/splits?dataset=severo/test_private_datasets
https://datasets-server.huggingface.co/splits?dataset=severo/inexistent
in a private window. It returns 401, not 404.
|
Private and inexistent datasets should return 404, not 401: Try
https://datasets-server.huggingface.co/splits?dataset=severo/test_private_datasets
https://datasets-server.huggingface.co/splits?dataset=severo/inexistent
in a private window. It returns 401, not 404.
|
open
|
2023-08-09T20:03:01Z
|
2023-08-09T20:03:13Z
| null |
severo
|
1,843,920,147 |
Gated datasets without authentication header return 404
|
It should return 401 (Unauthorized).
See for example https://datasets-server.huggingface.co/splits?dataset=severo/bigcode/the-stack from a private window.
Or https://datasets-server.huggingface.co/splits?dataset=JosephusCheung/GuanacoDataset (if you do not have access to it), while passing credentials (opening it while logged in to Hugging Face will pass your cookie)
|
Gated datasets without authentication header return 404: It should return 401 (Unauthorized).
See for example https://datasets-server.huggingface.co/splits?dataset=severo/bigcode/the-stack from a private window.
Or https://datasets-server.huggingface.co/splits?dataset=JosephusCheung/GuanacoDataset (if you do not have access to it), while passing credentials (opening it while logged in to Hugging Face will pass your cookie)
|
open
|
2023-08-09T19:58:49Z
|
2023-08-11T16:23:59Z
| null |
severo
|
1,843,912,198 |
Give a better error message for private datasets
|
When accessing a private dataset without credentials, or with the wrong credentials, we get the same error response as for inexistent datasets, which prevents disclosing the names of private datasets:
```
{"error":"The dataset does not exist, or is not accessible without authentication (private or gated). Please check the spelling of the dataset name or retry with authentication."}
```
But when passing credentials, we get:
```
{"error":"Not found."}
```
We could give a specific message stating that private datasets are not supported, for example. See https://github.com/huggingface/datasets-server/issues/39.
To test this, I opened https://datasets-server.huggingface.co/splits?dataset=severo/test_private_datasets in a tab, while being logged in on HF, so that the cookie is passed. This dataset is a private dataset of mine.
|
Give a better error message for private datasets: When accessing a private dataset without credentials, or with the wrong credentials, we get the same error response as for inexistent datasets, which prevents disclosing the names of private datasets:
```
{"error":"The dataset does not exist, or is not accessible without authentication (private or gated). Please check the spelling of the dataset name or retry with authentication."}
```
But when passing credentials, we get:
```
{"error":"Not found."}
```
We could give a specific message stating that private datasets are not supported, for example. See https://github.com/huggingface/datasets-server/issues/39.
To test this, I opened https://datasets-server.huggingface.co/splits?dataset=severo/test_private_datasets in a tab, while being logged in on HF, so that the cookie is passed. This dataset is a private dataset of mine.
|
closed
|
2023-08-09T19:52:45Z
|
2024-02-02T12:29:54Z
|
2024-02-02T12:29:54Z
|
severo
|
1,843,131,265 |
move cache metrics inc to orchestrator
|
Currently, we calculate cache metrics in a job that runs every X minutes, but this job queries the full cache collection every time, leading to Mongo query-targeting issues in MongoDB Atlas (ratio > 1000).
The ratio of documents scanned to returned exceeded 1000.0 on datasets-server-prod-shard-00-00.ujrd0.mongodb.net, which typically suggests that un-indexed queries are being run. To help identify which query is problematic, we recommend navigating to the Query Profiler tool within Atlas. Read more about the Profiler here.
In order to avoid this alert, this PR moves the metrics increment to the orchestrator so that, every time a cache entry is recorded, the metrics collection is updated as well.
Caveats:
- There is no way to "decrease" the counters if a cache record is deleted. It could be implemented when doing the delete, but it could increase DB load.
|
move cache metrics inc to orchestrator: Currently, we calculate cache metrics in a job that runs every X minutes, but this job queries the full cache collection every time, leading to Mongo query-targeting issues in MongoDB Atlas (ratio > 1000).
The ratio of documents scanned to returned exceeded 1000.0 on datasets-server-prod-shard-00-00.ujrd0.mongodb.net, which typically suggests that un-indexed queries are being run. To help identify which query is problematic, we recommend navigating to the Query Profiler tool within Atlas. Read more about the Profiler here.
In order to avoid this alert, this PR moves the metrics increment to the orchestrator so that, every time a cache entry is recorded, the metrics collection is updated as well.
Caveats:
- There is no way to "decrease" the counters if a cache record is deleted. It could be implemented when doing the delete, but it could increase DB load.
|
closed
|
2023-08-09T12:27:01Z
|
2023-10-10T13:29:28Z
|
2023-08-09T15:14:19Z
|
AndreaFrancis
|
1,842,845,132 |
Update datasets to 2.14.4
|
Update datasets to 2.14.4.
Fix #1652.
Fix partially #1550.
|
Update datasets to 2.14.4: Update datasets to 2.14.4.
Fix #1652.
Fix partially #1550.
|
closed
|
2023-08-09T09:30:36Z
|
2023-08-09T15:56:27Z
|
2023-08-09T15:56:26Z
|
albertvillanova
|
1,842,829,405 |
Update datasets to 2.14.4
|
Update `datasets` to 2.14.4: https://github.com/huggingface/datasets/releases/tag/2.14.4
> Fix authentication issues by @albertvillanova in https://github.com/huggingface/datasets/pull/6127
We will be able to remove some authentication tweaks, where we had to use `download_config` instead of `token`. See:
- #1620
|
Update datasets to 2.14.4: Update `datasets` to 2.14.4: https://github.com/huggingface/datasets/releases/tag/2.14.4
> Fix authentication issues by @albertvillanova in https://github.com/huggingface/datasets/pull/6127
We will be able to remove some authentication tweaks, where we had to use `download_config` instead of `token`. See:
- #1620
|
closed
|
2023-08-09T09:21:17Z
|
2023-08-09T15:56:27Z
|
2023-08-09T15:56:27Z
|
albertvillanova
|
1,841,843,039 |
fix: 🐛 TypeError: can't subtract offset-naive and offset-aware
|
https://github.com/huggingface/datasets-server/actions/runs/5798651863/job/15716898067
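For context, a minimal reproduction of the error class named in the title and the usual fix (plain Python, not the actual code from the linked CI run):
```python
from datetime import datetime, timezone

naive = datetime.utcnow()           # offset-naive
aware = datetime.now(timezone.utc)  # offset-aware

try:
    aware - naive
except TypeError as err:
    print(err)  # can't subtract offset-naive and offset-aware datetimes

# Fix: make both datetimes timezone-aware before subtracting
print(datetime.now(timezone.utc) - aware)
```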
|
fix: 🐛 TypeError: can't subtract offset-naive and offset-aware: https://github.com/huggingface/datasets-server/actions/runs/5798651863/job/15716898067
|
closed
|
2023-08-08T18:45:41Z
|
2023-08-08T18:45:47Z
|
2023-08-08T18:45:45Z
|
severo
|
1,840,500,190 |
fix: 🐛 add missing volume declaration
| null |
fix: 🐛 add missing volume declaration:
|
closed
|
2023-08-08T03:31:07Z
|
2023-08-08T03:31:38Z
|
2023-08-08T03:31:11Z
|
severo
|
1,840,272,977 |
feat: 🎸 use EFS instead of NFS for parquet-metadata files
|
beware: only deploy once 1. the workers have been stopped, 2. the sync has been done from NFS to EFS
|
feat: 🎸 use EFS instead of NFS for parquet-metadata files: beware: only deploy once 1. the workers have been stopped, 2. the sync has been done from NFS to EFS
|
closed
|
2023-08-07T22:09:12Z
|
2023-08-08T18:30:43Z
|
2023-08-08T03:25:45Z
|
severo
|
1,840,238,792 |
feat: 🎸 install latest version of rclone
| null |
feat: 🎸 install latest version of rclone:
|
closed
|
2023-08-07T21:35:34Z
|
2023-08-07T21:35:41Z
|
2023-08-07T21:35:40Z
|
severo
|
1,840,225,959 |
feat: 🎸 reduce the number of cpus for storage admin
|
because no nodes can provide this in the current setup
|
feat: 🎸 reduce the number of cpus for storage admin: because no nodes can provide this in the current setup
|
closed
|
2023-08-07T21:22:58Z
|
2023-08-07T21:23:37Z
|
2023-08-07T21:23:35Z
|
severo
|
1,840,221,377 |
build and sync only the required containers
|
For example, if the worker has been updated, the API service should not be rebuilt and resynchronized.
|
build and sync only the required containers: For example, if the worker has been updated, the API service should not be rebuilt and resynchronized.
|
open
|
2023-08-07T21:18:23Z
|
2023-08-07T21:18:34Z
| null |
severo
|
1,840,212,718 |
fix: 🐛 fix the dockerfile
|
to avoid an interactive question while configuring the apt packages
|
fix: 🐛 fix the dockerfile: to avoid an interactive question while configuring the apt packages
|
closed
|
2023-08-07T21:11:04Z
|
2023-08-07T21:11:09Z
|
2023-08-07T21:11:08Z
|
severo
|
1,840,190,949 |
feat: 🎸 use rclone on storage admin with multiple cores
| null |
feat: 🎸 use rclone on storage admin with multiple cores:
|
closed
|
2023-08-07T20:52:17Z
|
2023-08-07T20:53:05Z
|
2023-08-07T20:53:04Z
|
severo
|
1,840,161,522 |
feat: 🎸 remove /admin/cancel-jobs/{job_type}
|
it's never used
|
feat: 🎸 remove /admin/cancel-jobs/{job_type}: it's never used
|
closed
|
2023-08-07T20:28:40Z
|
2023-08-07T20:45:06Z
|
2023-08-07T20:45:05Z
|
severo
|
1,840,146,159 |
feat: 🎸 add RAM to the storage admin machine
|
else: rsync crashes for lack of memory
|
feat: 🎸 add RAM to the storage admin machine: else: rsync crashes for lack of memory
|
closed
|
2023-08-07T20:16:19Z
|
2023-08-07T20:17:06Z
|
2023-08-07T20:16:24Z
|
severo
|
1,840,138,301 |
Reduce ram for rows and search
| null |
Reduce ram for rows and search:
|
closed
|
2023-08-07T20:09:53Z
|
2023-08-07T20:10:25Z
|
2023-08-07T20:10:14Z
|
severo
|
1,840,117,873 |
feat: 🎸 reduce RAM from 8 to 7GiB for rows and search services
|
because nodes have only 16 GiB of RAM -> we want two pods per node
|
feat: 🎸 reduce RAM from 8 to 7GiB for rows and search services: because nodes have only 16 GiB of RAM -> we want two pods per node
|
closed
|
2023-08-07T19:54:13Z
|
2023-08-07T19:54:41Z
|
2023-08-07T19:54:18Z
|
severo
|
1,840,110,116 |
refactor: 💡 change labels to lowercase programmatically
|
instead of requiring the maintainer to lowercase manually
|
refactor: 💡 change labels to lowercase programmatically: instead of requiring the maintainer to lowercase manually
|
closed
|
2023-08-07T19:48:04Z
|
2023-08-07T19:48:13Z
|
2023-08-07T19:48:10Z
|
severo
|
1,840,088,218 |
feat: 🎸 reduce workers, and assign more RAM to /rows
| null |
feat: 🎸 reduce workers, and assign more RAM to /rows:
|
closed
|
2023-08-07T19:29:38Z
|
2023-08-07T19:31:45Z
|
2023-08-07T19:31:44Z
|
severo
|
1,840,005,885 |
remove locks when finishing a job
|
should fix the issue with old remaining locks, when a job is killed (too long job, after 40 minutes) while it's uploading files to the Hub (lock created with git_branch()).
also: add environment variables in docker compose and helm, add the description in readme, and fix test value
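A hedged sketch of the intended behavior, with an in-memory stand-in for the lock collection (the real lock is the git-branch lock mentioned above): release the lock in a `finally` block, and additionally clean up any lock still owned by a job when it is finished, so a killed job cannot block others forever.
```python
from contextlib import contextmanager
from typing import Dict, Iterator

# hypothetical in-memory stand-in for the queue's lock collection
_LOCKS: Dict[str, str] = {}


def release_locks(owner: str) -> None:
    # called when a job is finished (even if it was killed): drop its leftover locks
    for key in [k for k, v in _LOCKS.items() if v == owner]:
        del _LOCKS[key]


@contextmanager
def job_lock(key: str, owner: str) -> Iterator[None]:
    _LOCKS[key] = owner
    try:
        yield
    finally:
        # normal path: the lock is released here; if the process was killed mid-upload,
        # release_locks(owner) at job-finish time removes the leftover entry
        _LOCKS.pop(key, None)
```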
|
remove locks when finishing a job: should fix the issue with old remaining locks, when a job is killed (too long job, after 40 minutes) while it's uploading files to the Hub (lock created with git_branch()).
also: add environment variables in docker compose and helm, add the description in readme, and fix test value
|
closed
|
2023-08-07T18:27:32Z
|
2023-08-07T19:31:27Z
|
2023-08-07T19:17:51Z
|
severo
|
1,839,852,312 |
pass "endpoint" to hfh.hf_hub_download and hfh.hf_hub_url
|
once https://github.com/huggingface/huggingface_hub/pull/1580 is released.
`hf_hub_download`: https://github.com/search?q=repo%3Ahuggingface%2Fdatasets-server%20hf_hub_download&type=code
`hf_hub_url`: https://github.com/search?q=repo%3Ahuggingface%2Fdatasets-server+hf_hub_url&type=code (our local `hf_hub_url` function should disappear)
https://github.com/huggingface/datasets-server/blob/f07c09159d3304951c10062ddc485e315fb4c850/services/worker/src/worker/utils.py#L305
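A sketch of what the change could look like once the linked huggingface_hub PR is released, assuming it exposes an `endpoint` argument on both helpers (repo id, filename, revision, and endpoint below are placeholders):
```python
from huggingface_hub import hf_hub_download, hf_hub_url

HF_ENDPOINT = "https://huggingface.co"  # would come from the service configuration

# hf_hub_url builds the URL locally (no network call)
url = hf_hub_url(
    repo_id="user/dataset",                   # placeholder
    filename="train-00000-of-00001.parquet",  # placeholder
    repo_type="dataset",
    revision="refs/convert/parquet",
    endpoint=HF_ENDPOINT,  # assumed to be available once huggingface_hub#1580 ships
)
print(url)


def download_parquet_file() -> str:
    # same idea for hf_hub_download; not executed here since the repo id is a placeholder
    return hf_hub_download(
        repo_id="user/dataset",
        filename="train-00000-of-00001.parquet",
        repo_type="dataset",
        revision="refs/convert/parquet",
        endpoint=HF_ENDPOINT,
    )
```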
|
pass "endpoint" to hfh.hf_hub_download and hfh.hf_hub_url: once https://github.com/huggingface/huggingface_hub/pull/1580 is released.
`hf_hub_download`: https://github.com/search?q=repo%3Ahuggingface%2Fdatasets-server%20hf_hub_download&type=code
`hf_hub_url`: https://github.com/search?q=repo%3Ahuggingface%2Fdatasets-server+hf_hub_url&type=code (our local `hf_hub_url` function should disappear)
https://github.com/huggingface/datasets-server/blob/f07c09159d3304951c10062ddc485e315fb4c850/services/worker/src/worker/utils.py#L305
|
closed
|
2023-08-07T16:38:42Z
|
2024-02-06T14:53:46Z
|
2024-02-06T14:53:46Z
|
severo
|
1,839,841,865 |
ci: 🎡 fix the stale bot
|
The issue tags must be lowercase. P0, P1 and P2 were ignored since they were uppercase. I also refactored to make the code a bit clearer.
|
ci: 🎡 fix the stale bot: The issue tags must be lowercase. P0, P1 and P2 were ignored since they were uppercase. I also refactored to make the code a bit clearer.
|
closed
|
2023-08-07T16:32:19Z
|
2023-08-07T16:32:31Z
|
2023-08-07T16:32:30Z
|
severo
|
1,837,278,922 |
Set default env values for staging and prod - delete indexes
|
- delete indexes job will run at 00:00 for staging and prod (default in values.yaml)
- expiredTimeIntervalSeconds: 259_200 # 3 days for prod
|
Set default env values for staging and prod - delete indexes: - delete indexes job will run at 00:00 for staging and prod (default in values.yaml)
- expiredTimeIntervalSeconds: 259_200 # 3 days for prod
|
closed
|
2023-08-04T20:07:53Z
|
2023-08-04T20:08:39Z
|
2023-08-04T20:08:38Z
|
AndreaFrancis
|
1,837,263,160 |
test reduce index interval time - prod
| null |
test reduce index interval time - prod:
|
closed
|
2023-08-04T19:51:59Z
|
2023-08-04T19:53:19Z
|
2023-08-04T19:53:18Z
|
AndreaFrancis
|
1,837,126,961 |
Init duckdb storage only for delete-indexes action
|
It should not be initialized for other actions like backfill or collect-metrics.
|
Init duckdb storage only for delete-indexes action: It should not be initialized for other actions like backfill or collect-metrics.
|
closed
|
2023-08-04T17:43:14Z
|
2023-08-04T17:45:29Z
|
2023-08-04T17:45:28Z
|
AndreaFrancis
|
1,837,118,018 |
Fix delete indexes volume
| null |
Fix delete indexes volume:
|
closed
|
2023-08-04T17:34:39Z
|
2023-08-04T17:37:40Z
|
2023-08-04T17:37:39Z
|
AndreaFrancis
|
1,837,112,526 |
Fix container definition for delete-indexes job
| null |
Fix container definition for delete-indexes job:
|
closed
|
2023-08-04T17:29:14Z
|
2023-08-04T17:29:55Z
|
2023-08-04T17:29:54Z
|
AndreaFrancis
|
1,837,104,736 |
Add duckdb volume to delete-indexes k8s job
| null |
Add duckdb volume to delete-indexes k8s job:
|
closed
|
2023-08-04T17:21:46Z
|
2023-08-04T17:24:10Z
|
2023-08-04T17:24:09Z
|
AndreaFrancis
|
1,837,095,986 |
fix: unquoted env vars
| null |
fix: unquoted env vars:
|
closed
|
2023-08-04T17:14:11Z
|
2023-08-04T17:16:02Z
|
2023-08-04T17:15:50Z
|
rtrompier
|
1,837,079,857 |
Fix delete indexes job - fix cron
| null |
Fix delete indexes job - fix cron:
|
closed
|
2023-08-04T16:59:06Z
|
2023-08-04T17:00:27Z
|
2023-08-04T17:00:26Z
|
AndreaFrancis
|
1,837,065,148 |
Try to fix delete indexes job
|
Staging deploy fails with message:
```
one or more objects failed to apply, reason: CronJob in version "v1" cannot be handled as a CronJob: json: cannot unmarshal number into Go struct field EnvVar.spec.jobTemplate.spec.template.spec.containers.env.value of type string
```
|
Try to fix delete indexes job: Staging deploy fails with message:
```
one or more objects failed to apply, reason: CronJob in version "v1" cannot be handled as a CronJob: json: cannot unmarshal number into Go struct field EnvVar.spec.jobTemplate.spec.template.spec.containers.env.value of type string
```
|
closed
|
2023-08-04T16:45:27Z
|
2023-08-04T16:52:57Z
|
2023-08-04T16:52:56Z
|
AndreaFrancis
|
1,837,048,151 |
Try to fix delete indexes job
| null |
Try to fix delete indexes job:
|
closed
|
2023-08-04T16:30:38Z
|
2023-08-04T16:31:31Z
|
2023-08-04T16:31:30Z
|
AndreaFrancis
|
1,837,035,385 |
Increase chart version because of new job
| null |
Increase chart version because of new job:
|
closed
|
2023-08-04T16:22:53Z
|
2023-08-04T16:23:45Z
|
2023-08-04T16:23:44Z
|
AndreaFrancis
|
1,837,023,674 |
Enable delete-indexes job to run every 10 minutes
|
In order to verify the correct functionality of the delete-index job, I would like to test every 10 minutes.
Then I will remove this schedule and keep the default (once a day).
|
Enable delete-indexes job to run every 10 minutes: In order to verify the correct functionality of the delete-index job, I would like to test every 10 minutes.
Then I will remove this schedule and keep the default (once a day).
|
closed
|
2023-08-04T16:13:33Z
|
2023-08-04T16:16:17Z
|
2023-08-04T16:16:16Z
|
AndreaFrancis
|
1,836,574,648 |
Fix e2e test_16_statistics
|
Fix e2e `test_16_statistics`, as this file was added after the branch (https://github.com/huggingface/datasets-server/tree/update-datasets-2.14) was created from main.
This fix is necessary after the refactoring introduced by:
- #1616
|
Fix e2e test_16_statistics: Fix e2e `test_16_statistics`, as this file was added after the branch (https://github.com/huggingface/datasets-server/tree/update-datasets-2.14) was created from main.
This fix is necessary after the refactoring introduced by:
- #1616
|
closed
|
2023-08-04T11:26:45Z
|
2023-08-04T11:40:55Z
|
2023-08-04T11:40:54Z
|
albertvillanova
|
1,836,527,064 |
Update datasets dependency to 2.14
|
Update datasets dependency to 2.14 and fix related issues.
Fix #1589.
|
Update datasets dependency to 2.14: Update datasets dependency to 2.14 and fix related issues.
Fix #1589.
|
closed
|
2023-08-04T10:58:41Z
|
2023-08-04T15:47:34Z
|
2023-08-04T12:57:08Z
|
albertvillanova
|
1,836,456,603 |
Fix authentication with DownloadConfig
|
Fix authentication by passing `DownloadConfig` with `token`.
Fix partially #1589.
|
Fix authentication with DownloadConfig: Fix authentication by passing `DownloadConfig` with `token`.
Fix partially #1589.
|
closed
|
2023-08-04T10:08:52Z
|
2023-08-04T10:55:44Z
|
2023-08-04T10:55:44Z
|
albertvillanova
|
1,836,352,537 |
Fix HfFileSystem
|
Fix usage of `HfFileSystem` (instead of `HTTPFileSystem`) and filename format of `data_files`.
Additionally, fix `fill_builder_info` with additional builder information: `builder_name`, `dataset_name`, `config_name` and `version`.
Fix partially #1589.
|
Fix HfFileSystem: Fix usage of `HfFileSystem` (instead of `HTTPFileSystem`) and filename format of `data_files`.
Additionally, fix `fill_builder_info` with additional builder information: `builder_name`, `dataset_name`, `config_name` and `version`.
Fix partially #1589.
|
closed
|
2023-08-04T08:58:41Z
|
2023-08-04T09:36:41Z
|
2023-08-04T09:36:40Z
|
albertvillanova
|
1,835,781,976 |
feat: add search field to /is-valid
|
Adding new `search` field to the /is-valid response; this new field should help the UI identify if search is available for the split viewer.
Previously, /is-valid was only available at dataset level; adding split and config levels gives better granularity.
New job runners:
- split-is-valid
- config-is-valid
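A hedged sketch of how a client could read the new field; only the `search` boolean is described above, so treat the rest of the response shape as unknown:
```python
import requests

response = requests.get(
    "https://datasets-server.huggingface.co/is-valid",
    params={"dataset": "mnist"},
    timeout=30,
)
body = response.json()

# New boolean added by this PR: True when search is available for the split viewer
if body.get("search"):
    print("search is available for this dataset")
```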
|
feat: add search field to /is-valid: Adding new `search` field to the /is-valid response; this new field should help the UI identify if search is available for the split viewer.
Previously, /is-valid was only available at dataset level; adding split and config levels gives better granularity.
New job runners:
- split-is-valid
- config-is-valid
|
closed
|
2023-08-03T21:54:39Z
|
2023-08-08T13:57:04Z
|
2023-08-08T13:57:03Z
|
AndreaFrancis
|
1,835,708,656 |
Fix cached filenames
|
Fix the filename of cached files, so that it contains the `builder.dataset_name` instead of `builder.name`.
Fix partially #1589.
|
Fix cached filenames: Fix the filename of cached files, so that it contains the `builder.dataset_name` instead of `builder.name`.
Fix partially #1589.
|
closed
|
2023-08-03T20:41:29Z
|
2023-08-04T06:55:54Z
|
2023-08-04T06:55:53Z
|
albertvillanova
|
1,835,440,915 |
Fix default config name
|
Fix default config name and refactor:
- the function no longer uses the argument `dataset`
- it returns a 2-tuple
Fix partially #1589.
|
Fix default config name: Fix default config name and refactor:
- the function no longer uses the argument `dataset`
- it returns a 2-tuple
Fix partially #1589.
|
closed
|
2023-08-03T17:15:04Z
|
2023-08-03T19:46:19Z
|
2023-08-03T19:46:18Z
|
albertvillanova
|
1,835,299,373 |
Increase replicas for all worker
| null |
Increase replicas for all worker:
|
closed
|
2023-08-03T15:36:56Z
|
2023-08-03T15:37:56Z
|
2023-08-03T15:37:55Z
|
AndreaFrancis
|
1,835,278,772 |
Update datasets 2.14.3
|
Update `datasets` dependency to version 2.14.3, instead of 2.14.1 because there were issues. See:
- https://github.com/huggingface/datasets/pull/6094
- https://github.com/huggingface/datasets/pull/6095
- https://github.com/huggingface/datasets/pull/6105
- https://github.com/huggingface/datasets/pull/6107
We are merging this PR into a dedicated branch:
- https://github.com/huggingface/datasets-server/tree/update-datasets-2.14
This way we can merge subsequent required PRs into the same branch to fix the CI. Once the CI is green, we can merge the branch into main
Fix partially #1589.
Fix partially #1550.
Supersede and close #1577.
Supersede and close #1588.
|
Update datasets 2.14.3: Update `datasets` dependency to version 2.14.3, instead of 2.14.1 because there were issues. See:
- https://github.com/huggingface/datasets/pull/6094
- https://github.com/huggingface/datasets/pull/6095
- https://github.com/huggingface/datasets/pull/6105
- https://github.com/huggingface/datasets/pull/6107
We are merging this PR into a dedicated branch:
- https://github.com/huggingface/datasets-server/tree/update-datasets-2.14
This way we can merge subsequent required PRs into the same branch to fix the CI. Once the CI is green, we can merge the branch into main
Fix partially #1589.
Fix partially #1550.
Supersede and close #1577.
Supersede and close #1588.
|
closed
|
2023-08-03T15:24:11Z
|
2023-08-03T16:30:56Z
|
2023-08-03T16:18:54Z
|
albertvillanova
|
1,835,234,117 |
In config-parquet-metadata, delete the old files before uploading new ones
|
I'm currently moving the parquet-metadata files to a new storage, and I see strange numbering for shards. See for example, in the same directory (35456 parquet files for this one):
```
root@prod-datasets-server-storage-admin-786cfbf44-4ncpv:/storage# ls parquet-metadata-new/Antreas/TALI-large-2/--/Antreas--TALI-large-2/ | tail
parquet-train-00059-of-00085.parquet
parquet-train-00059-of-00086.parquet
parquet-train-00059-of-00087.parquet
parquet-train-00059-of-00088.parquet
parquet-train-00059-of-00089.parquet
parquet-train-00059-of-00090.parquet
parquet-train-00059-of-00092.parquet
parquet-train-00059-of-00093.parquet
parquet-train-00059-of-00094.parquet
parquet-train-00059-of-00095.parquet
root@prod-datasets-server-storage-admin-786cfbf44-4ncpv:/storage# ls parquet-metadata-new/Antreas/TALI-large-2/--/Antreas--TALI-large-2/ | tail
parquet-train-00072-of-00257.parquet
parquet-train-00072-of-00258.parquet
parquet-train-00072-of-00259.parquet
parquet-train-00072-of-00260.parquet
parquet-train-00072-of-00261.parquet
parquet-train-00072-of-00262.parquet
parquet-train-00072-of-00264.parquet
parquet-train-00072-of-00265.parquet
parquet-train-00072-of-00266.parquet
parquet-train-00072-of-00267.parquet
```
Is it normal @lhoestq? I think the second number should never change for a given split, right?
|
In config-parquet-metadata, delete the old files before uploading new ones: I'm currently moving the parquet-metadata files to a new storage, and I see strange numbering for shards. See for example, in the same directory (35456 parquet files for this one):
```
root@prod-datasets-server-storage-admin-786cfbf44-4ncpv:/storage# ls parquet-metadata-new/Antreas/TALI-large-2/--/Antreas--TALI-large-2/ | tail
parquet-train-00059-of-00085.parquet
parquet-train-00059-of-00086.parquet
parquet-train-00059-of-00087.parquet
parquet-train-00059-of-00088.parquet
parquet-train-00059-of-00089.parquet
parquet-train-00059-of-00090.parquet
parquet-train-00059-of-00092.parquet
parquet-train-00059-of-00093.parquet
parquet-train-00059-of-00094.parquet
parquet-train-00059-of-00095.parquet
root@prod-datasets-server-storage-admin-786cfbf44-4ncpv:/storage# ls parquet-metadata-new/Antreas/TALI-large-2/--/Antreas--TALI-large-2/ | tail
parquet-train-00072-of-00257.parquet
parquet-train-00072-of-00258.parquet
parquet-train-00072-of-00259.parquet
parquet-train-00072-of-00260.parquet
parquet-train-00072-of-00261.parquet
parquet-train-00072-of-00262.parquet
parquet-train-00072-of-00264.parquet
parquet-train-00072-of-00265.parquet
parquet-train-00072-of-00266.parquet
parquet-train-00072-of-00267.parquet
```
Is it normal @lhoestq? I think the second number should never change for a given split, right?
|
closed
|
2023-08-03T14:57:37Z
|
2024-02-02T17:05:46Z
|
2024-02-02T17:05:45Z
|
severo
|
1,833,974,974 |
test: add basic e2e for /statistics
| null |
test: add basic e2e for /statistics:
|
closed
|
2023-08-02T22:19:23Z
|
2023-08-03T15:12:25Z
|
2023-08-03T15:12:24Z
|
AndreaFrancis
|
1,833,924,953 |
Increase resources
|
Currently we have 365K jobs waiting, might help flush the queue
|
Increase resources: Currently we have 365K jobs waiting, might help flush the queue
|
closed
|
2023-08-02T21:26:40Z
|
2023-08-02T21:27:39Z
|
2023-08-02T21:27:38Z
|
AndreaFrancis
|
1,833,900,598 |
fix: 🐛 fix docker image name
| null |
fix: 🐛 fix docker image name:
|
closed
|
2023-08-02T21:02:48Z
|
2023-08-02T21:02:54Z
|
2023-08-02T21:02:53Z
|
severo
|
1,833,887,301 |
feat: 🎸 build a Docker image for storageAdmin to have rsync
|
I also add curl and wget
|
feat: 🎸 build a Docker image for storageAdmin to have rsync: I also add curl and wget
|
closed
|
2023-08-02T20:50:50Z
|
2023-08-02T20:51:57Z
|
2023-08-02T20:51:56Z
|
severo
|
1,833,879,955 |
fix: /search - set cache directory when downloading the index
|
Currently /search is throwing this error:
```
File "/src/services/search/src/search/routes/search.py", line 91, in download_index_file
hf_hub_download(
File "/src/services/search/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/src/services/search/.venv/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1159, in hf_hub_download
os.makedirs(storage_folder, exist_ok=True)
File "/usr/local/lib/python3.9/os.py", line 215, in makedirs
makedirs(head, exist_ok=exist_ok)
File "/usr/local/lib/python3.9/os.py", line 215, in makedirs
makedirs(head, exist_ok=exist_ok)
File "/usr/local/lib/python3.9/os.py", line 215, in makedirs
makedirs(head, exist_ok=exist_ok)
File "/usr/local/lib/python3.9/os.py", line 225, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/.cache'
DEBUG: 2023-08-02 20:26:46,189 - root - Unexpected error.
```
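The fix boils down to giving `hf_hub_download` a writable cache directory instead of letting it fall back to `/.cache`. A minimal sketch (the directory, repo id, filename and revision are placeholders):
```python
from huggingface_hub import hf_hub_download

DUCKDB_INDEX_CACHE_DIRECTORY = "/tmp/duckdb-index-cache"  # placeholder for the configured storage


def download_index_file() -> str:
    # passing cache_dir avoids the PermissionError on '/.cache' shown above
    return hf_hub_download(
        repo_id="user/dataset",           # placeholder
        filename="index.duckdb",          # placeholder
        repo_type="dataset",
        revision="refs/convert/parquet",  # placeholder revision
        cache_dir=DUCKDB_INDEX_CACHE_DIRECTORY,
    )
```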
|
fix: /search - set cache directory when downloading the index: Currently /search is throwing this error:
```
File "/src/services/search/src/search/routes/search.py", line 91, in download_index_file
hf_hub_download(
File "/src/services/search/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/src/services/search/.venv/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1159, in hf_hub_download
os.makedirs(storage_folder, exist_ok=True)
File "/usr/local/lib/python3.9/os.py", line 215, in makedirs
makedirs(head, exist_ok=exist_ok)
File "/usr/local/lib/python3.9/os.py", line 215, in makedirs
makedirs(head, exist_ok=exist_ok)
File "/usr/local/lib/python3.9/os.py", line 215, in makedirs
makedirs(head, exist_ok=exist_ok)
File "/usr/local/lib/python3.9/os.py", line 225, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/.cache'
DEBUG: 2023-08-02 20:26:46,189 - root - Unexpected error.
```
|
closed
|
2023-08-02T20:43:54Z
|
2023-08-02T21:00:26Z
|
2023-08-02T21:00:24Z
|
AndreaFrancis
|
1,833,814,325 |
feat: 🎸 mount EFS storage for parquet-metadata on storage-admin
| null |
feat: 🎸 mount EFS storage for parquet-metadata on storage-admin:
|
closed
|
2023-08-02T19:49:03Z
|
2023-08-02T19:51:42Z
|
2023-08-02T19:51:41Z
|
severo
|
1,833,754,005 |
Terminate worker pods quicker
|
Sometimes, when we deploy to prod, the sync is blocked by the worker pods termination, which can take up to 30 minutes! See https://huggingface.slack.com/archives/C04L6P8KNQ5/p1690990164606589?thread_ts=1690989017.253219&cid=C04L6P8KNQ5 (internal)
Ideally, when the pod receives SIGKILL, it should exit in the next few seconds.
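For reference, Kubernetes sends SIGTERM first and only sends SIGKILL after the grace period, so the practical lever is reacting to SIGTERM promptly in the worker loop. A generic sketch (the loop body is a placeholder, not the actual worker code):
```python
import signal
import time

stopping = False


def handle_sigterm(signum, frame) -> None:
    # SIGKILL itself cannot be caught; exiting quickly on SIGTERM is what lets the pod
    # terminate within seconds instead of waiting out the grace period
    global stopping
    stopping = True


signal.signal(signal.SIGTERM, handle_sigterm)

while not stopping:
    # process_next_job()  # placeholder for the real worker loop body
    time.sleep(1)
print("worker loop exited")
```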
|
Terminate worker pods quicker: Sometimes, when we deploy to prod, the sync is blocked by the worker pods termination, which can take up to 30 minutes! See https://huggingface.slack.com/archives/C04L6P8KNQ5/p1690990164606589?thread_ts=1690989017.253219&cid=C04L6P8KNQ5 (internal)
Ideally, when the pod receives SIGKILL, it should exit in the next few seconds.
|
closed
|
2023-08-02T19:06:50Z
|
2024-02-06T14:52:28Z
|
2024-02-06T14:52:28Z
|
severo
|
1,833,406,655 |
feat: 🎸 optimize the computation of metrics
|
fixes #1604.
It seems like I had to first sort by the fields.
|
feat: 🎸 optimize the computation of metrics: fixes #1604.
It seems like I had to first sort by the fields.
|
closed
|
2023-08-02T15:27:00Z
|
2023-08-02T16:08:27Z
|
2023-08-02T16:08:26Z
|
severo
|
1,833,373,493 |
Optimize mongo query
|
The metrics about the cache entries are a list of tuples `(kind, http_status, error_code, count)`.
Currently we compute them with:
https://github.com/huggingface/datasets-server/blob/deb708ae737a2f8da51b74c1ca4a489c4ff39b51/libs/libcommon/src/libcommon/simple_cache.py#L490-L506
These queries are very slow (e.g., each `entries(kind=kind, http_status=http_status).distinct("error_code")` command can take about 10s)
An alternative is to run an aggregation:
```python
def format_group(group: Dict[str, Any]) -> CountEntry:
kind = group["kind"]
if not isinstance(kind, str):
raise TypeError("kind must be a str")
http_status = group["http_status"]
if not isinstance(http_status, int):
raise TypeError("http_status must be an int")
error_code = group["error_code"]
if not isinstance(error_code, str) and error_code is not None:
raise TypeError("error_code must be a str or None")
count = group["count"]
if not isinstance(count, int):
raise TypeError("count must be an int")
return {"kind": kind, "http_status": http_status, "error_code": error_code, "count": count}
def get_responses_count_by_kind_status_and_error_code() -> List[CountEntry]:
groups = CachedResponseDocument.objects().aggregate(
[
{
"$group": {
"_id": {"kind": "$kind", "http_status": "$http_status", "error_code": "$error_code"},
"count": {"$sum": 1},
}
},
{
"$project": {
"kind": "$_id.kind",
"http_status": "$_id.http_status",
"error_code": "$_id.error_code",
"count": "$count",
}
},
]
)
return [format_group(group) for group in groups]
```
but from my tests on the prod database, it does not compute quickly either (3 minutes).
Note that we have the `(kind, http_status, error_code)` index in the database:
<img width="1487" alt="Capture d'écran 2023-08-02 à 11 07 00" src="https://github.com/huggingface/datasets-server/assets/1676121/87d587e4-1051-455f-b4ec-ee9f55a47b40">
|
Optimize mongo query: The metrics about the cache entries are a list of tuples `(kind, http_status, error_code, count)`.
Currently we compute them with:
https://github.com/huggingface/datasets-server/blob/deb708ae737a2f8da51b74c1ca4a489c4ff39b51/libs/libcommon/src/libcommon/simple_cache.py#L490-L506
These queries are very slow (e.g., each `entries(kind=kind, http_status=http_status).distinct("error_code")` command can take about 10s)
An alternative is to run an aggregation:
```python
def format_group(group: Dict[str, Any]) -> CountEntry:
kind = group["kind"]
if not isinstance(kind, str):
raise TypeError("kind must be a str")
http_status = group["http_status"]
if not isinstance(http_status, int):
raise TypeError("http_status must be an int")
error_code = group["error_code"]
if not isinstance(error_code, str) and error_code is not None:
raise TypeError("error_code must be a str or None")
count = group["count"]
if not isinstance(count, int):
raise TypeError("count must be an int")
return {"kind": kind, "http_status": http_status, "error_code": error_code, "count": count}
def get_responses_count_by_kind_status_and_error_code() -> List[CountEntry]:
groups = CachedResponseDocument.objects().aggregate(
[
{
"$group": {
"_id": {"kind": "$kind", "http_status": "$http_status", "error_code": "$error_code"},
"count": {"$sum": 1},
}
},
{
"$project": {
"kind": "$_id.kind",
"http_status": "$_id.http_status",
"error_code": "$_id.error_code",
"count": "$count",
}
},
]
)
return [format_group(group) for group in groups]
```
but from my tests on the prod database, it does not compute quickly either (3 minutes).
Note that we have the `(kind, http_status, error_code)` index in the database:
<img width="1487" alt="Capture d'écran 2023-08-02 à 11 07 00" src="https://github.com/huggingface/datasets-server/assets/1676121/87d587e4-1051-455f-b4ec-ee9f55a47b40">
|
closed
|
2023-08-02T15:07:44Z
|
2023-08-02T16:08:27Z
|
2023-08-02T16:08:27Z
|
severo
|
1,833,251,270 |
fix: 🐛 fix vulnerability in cryptography
| null |
fix: 🐛 fix vulnerability in cryptography:
|
closed
|
2023-08-02T14:00:42Z
|
2023-08-02T14:16:24Z
|
2023-08-02T14:16:23Z
|
severo
|
1,833,221,964 |
Parallel steps update incoherence
|
See the discussion https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M/discussions/1#64c9e88a6a26cddbecd9bec6
Before the dataset update, the `split-first-rows-from-parquet` response was a success, and thus the `split-first-rows-from-streaming` response, computed later, is a `ResponseAlreadyComputedError` error.
But after the dataset update, the `split-first-rows-from-parquet` response was an error (due to a disk issue: `FileSystemError`) and, due to a heavy load on the infra, the `split-first-rows-from-streaming` response has not been processed yet, so it's still `ResponseAlreadyComputedError`.
Possibilities:
1. remove `ResponseAlreadyComputedError`, and copy the response (doubles storage)
2. change the model for parallel steps, and store only once. Let's say we have M+N parallel steps. If M steps are successful (normally with the same response) and N steps are erroneous, let's store the optional successful response content once, and store all the responses while removing the content from the successful ones. It adds a lot of complexity.
3. keep the logic, but if a parallel step gives an error whereas it had a successful response before AND the other parallel step is `ResponseAlreadyComputedError`, copy the successful answer to the other step. Seems brittle and overly complex.
4. keep the logic, but if a parallel step gives an error whereas it had a successful response before AND the other parallel step is `ResponseAlreadyComputedError`, delete the other answer
None seems like a good idea. Do you have better ideas @huggingface/datasets-server ?
|
Parallel steps update incoherence: See the discussion https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M/discussions/1#64c9e88a6a26cddbecd9bec6
Before the dataset update, the `split-first-rows-from-parquet` response was a success, and thus the `split-first-rows-from-streaming` response, computed later, is a `ResponseAlreadyComputedError` error.
But after the dataset update, the `split-first-rows-from-parquet` response was an error (due to a disk issue: `FileSystemError`) and, due to a heavy load on the infra, the `split-first-rows-from-streaming` response has not been processed yet, so it's still `ResponseAlreadyComputedError`.
Possibilities:
1. remove `ResponseAlreadyComputedError`, and copy the response (doubles storage)
2. change the model for parallel steps, and store only once. Let's say we have M+N parallel steps. If M steps are successful (normally with the same response) and N steps are erroneous, let's store the optional successful response content once, and store all the responses while removing the content from the successful ones. It adds a lot of complexity.
3. keep the logic, but if a parallel step gives an error whereas it had a successful response before AND the other parallel step is `ResponseAlreadyComputedError`, copy the successful answer to the other step. Seems brittle and overly complex.
4. keep the logic, but if a parallel step gives an error whereas it had a successful response before AND the other parallel step is `ResponseAlreadyComputedError`, delete the other answer
None seems like a good idea. Do you have better ideas @huggingface/datasets-server ?
|
closed
|
2023-08-02T13:44:35Z
|
2024-02-06T14:52:06Z
|
2024-02-06T14:52:05Z
|
severo
|
1,832,155,278 |
refactor: 💡 clean the chart variables
|
- ensure coherence in the names of the chart variables
- don't provide cached-assets to admin service (it does not use it)
- remove code related to cached assets in api service (it has moved to rows)
Note: I'm not sure we're allowed to put helper templates in subdirectories (_volumes/, etc). When deployed I'll try on staging, and move them back to the templates/ root directory if it fails.
|
refactor: 💡 clean the chart variables: - ensure coherence in the names of the chart variables
- don't provide cached-assets to admin service (it does not use it)
- remove code related to cached assets in api service (it has moved to rows)
Note: I'm not sure we're allowed to put helper templates in subdirectories (_volumes/, etc). When deployed I'll try on staging, and move them back to the templates/ root directory if it fails.
|
closed
|
2023-08-01T23:12:20Z
|
2023-08-02T15:09:48Z
|
2023-08-02T15:09:47Z
|
severo
|
1,832,074,393 |
feat: 🎸 increment the chart version
| null |
feat: 🎸 increment the chart version:
|
closed
|
2023-08-01T21:43:07Z
|
2023-08-01T21:43:53Z
|
2023-08-01T21:43:52Z
|
severo
|
1,832,071,814 |
fix: 🐛 fix statistics volume
|
also (unrelated) increase resources
|
fix: 🐛 fix statistics volume: also (unrelated) increase resources
|
closed
|
2023-08-01T21:40:45Z
|
2023-08-01T21:41:29Z
|
2023-08-01T21:41:28Z
|
severo
|
1,832,042,159 |
feat: 🎸 give more RAM to backfill script
| null |
feat: 🎸 give more RAM to backfill script:
|
closed
|
2023-08-01T21:12:28Z
|
2023-08-01T21:13:11Z
|
2023-08-01T21:12:33Z
|
severo
|
1,832,005,634 |
feat: 🎸 fix temporarily the backfill cron
| null |
feat: 🎸 fix temporarily the backfill cron:
|
closed
|
2023-08-01T20:46:15Z
|
2023-08-01T20:53:05Z
|
2023-08-01T20:53:04Z
|
severo
|
1,831,996,035 |
Fix backfill job
| null |
Fix backfill job:
|
closed
|
2023-08-01T20:38:44Z
|
2023-08-01T20:41:22Z
|
2023-08-01T20:41:20Z
|
severo
|
1,831,961,844 |
Fix descriptive statistics env var
| null |
Fix descriptive statistics env var:
|
closed
|
2023-08-01T20:11:49Z
|
2023-08-01T20:38:49Z
|
2023-08-01T20:38:48Z
|
severo
|
1,831,951,591 |
split-duckdb-index fix: id from 0 and enable parquet 5G
|
- Fix serial minvalue for comment https://github.com/huggingface/datasets-server/pull/1516#discussion_r1276696053
- Enable indexing datasets with parquet under 5G
|
split-duckdb-index fix: id from 0 and enable parquet 5G: - Fix serial minvalue for comment https://github.com/huggingface/datasets-server/pull/1516#discussion_r1276696053
- Enable indexing datasets with parquet under 5G
|
closed
|
2023-08-01T20:05:48Z
|
2023-08-01T20:27:01Z
|
2023-08-01T20:27:00Z
|
AndreaFrancis
|
1,831,878,513 |
feat: 🎸 cron every 4 hours (my calculation was wrong)
| null |
feat: 🎸 cron every 4 hours (my calculation was wrong):
|
closed
|
2023-08-01T19:17:14Z
|
2023-08-01T19:17:42Z
|
2023-08-01T19:17:19Z
|
severo
|
1,831,869,462 |
feat: 🎸 increase rate of backfill
| null |
feat: 🎸 increase rate of backfill:
|
closed
|
2023-08-01T19:10:22Z
|
2023-08-01T19:10:52Z
|
2023-08-01T19:10:32Z
|
severo
|
1,831,331,518 |
Should we convert the datasets to other formats than parquet?
|
One OP asked for CSV conversion (not explicitly from the Hub itself): https://huggingface.co/datasets/medical_questions_pairs/discussions/3#64c8c2af527d76365563285c
|
Should we convert the datasets to other formats than parquet?: One OP asked for CSV conversion (not explicitly from the Hub itself): https://huggingface.co/datasets/medical_questions_pairs/discussions/3#64c8c2af527d76365563285c
|
closed
|
2023-08-01T13:47:12Z
|
2024-06-19T14:19:01Z
|
2024-06-19T14:19:01Z
|
severo
|
1,830,070,827 |
feat: 🎸 adapt examples to new format of the API response
| null |
feat: 🎸 adapt examples to new format of the API response:
|
closed
|
2023-07-31T21:35:38Z
|
2023-07-31T21:41:14Z
|
2023-07-31T21:40:44Z
|
severo
|
1,828,545,166 |
Update datasets dependency to 2.14
|
TODO:
- [x] #1614
- [x] #1616
- [x] #1617
- [x] #1619
- [x] #1620
|
Update datasets dependency to 2.14: TODO:
- [x] #1614
- [x] #1616
- [x] #1617
- [x] #1619
- [x] #1620
|
closed
|
2023-07-31T07:06:37Z
|
2023-08-04T12:57:09Z
|
2023-08-04T12:57:09Z
|
albertvillanova
|
1,828,539,095 |
Update datasets dependency to 2.14.2 version
|
Update `datasets` dependency to version 2.14.2, instead of 2.14.1 because there were issues. See:
- https://github.com/huggingface/datasets/pull/6094
- https://github.com/huggingface/datasets/pull/6095
Fix #1589.
Fix partially #1550.
Supersede and close #1577.
|
Update datasets dependency to 2.14.2 version: Update `datasets` dependency to version 2.14.2, instead of 2.14.1 because there were issues. See:
- https://github.com/huggingface/datasets/pull/6094
- https://github.com/huggingface/datasets/pull/6095
Fix #1589.
Fix partially #1550.
Supersede and close #1577.
|
closed
|
2023-07-31T07:02:18Z
|
2024-01-26T09:07:29Z
|
2023-08-07T09:07:37Z
|
albertvillanova
|
1,826,848,253 |
All the workers were blocked because of a single lock entry
|
Unfortunately, I deleted the faulty lock entry, and I don't remember its value.
Every worker was trying to start the same job, but none could acquire the lock; thus, they all looped on the same job.
|
All the workers were blocked because of a single lock entry: Unfortunately, I deleted the faulty lock entry, and I don't remember its value.
Every worker was trying to start the same job, but none could acquire the lock; thus, they all looped on the same job.
|
closed
|
2023-07-28T18:01:40Z
|
2023-08-07T19:40:51Z
|
2023-08-07T19:40:51Z
|
severo
|
1,826,842,280 |
Revert logs and 'dns config revert'
| null |
Revert logs and 'dns config revert':
|
closed
|
2023-07-28T17:56:14Z
|
2023-07-28T17:56:56Z
|
2023-07-28T17:56:21Z
|
severo
|
1,826,831,733 |
feat: 🎸 set log level to debug in prod
| null |
feat: 🎸 set log level to debug in prod:
|
closed
|
2023-07-28T17:46:38Z
|
2023-07-28T17:47:17Z
|
2023-07-28T17:46:44Z
|
severo
|
1,826,807,446 |
Revert "feat: 🎸 reduce the number of DNS requests (#1581)"
|
This reverts commit ad754cda0c26bf7d609292853f0c1681380a882e.
|
Revert "feat: 🎸 reduce the number of DNS requests (#1581)": This reverts commit ad754cda0c26bf7d609292853f0c1681380a882e.
|
closed
|
2023-07-28T17:28:04Z
|
2023-07-28T17:28:33Z
|
2023-07-28T17:28:11Z
|
severo
|
1,826,801,599 |
Rights error for statistics step
|
```
INFO: 2023-07-28 16:40:10,399 - root - [split-descriptive-statistics] compute JobManager(job_id=64c3ef6a6c181e70c1093bb7 dataset=Melanit/testsetneuraluma job_info={'job_id': '64c3ef6a6c181e70c1093bb7', 'type': 'split-descriptive-statistics', 'params': {'dataset': 'Melanit/testsetneuraluma', 'revision': '406baec5a6d9379698896d92041f8516ffbb6ba3', 'config': 'Melanit--testsetneuraluma', 'split': 'exampledataset'}, 'priority': <Priority.NORMAL: 'normal'>, 'difficulty': 70}
ERROR: 2023-07-28 16:40:10,401 - root - [Errno 13] Permission denied: '/stats-cache/21756284577990-split-descriptive-statistics-Melanit-testsetneura-aff1fad6'
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_manager.py", line 152, in process
self.job_runner.pre_compute()
File "/src/services/worker/src/worker/job_runners/_job_runner_with_cache.py", line 56, in pre_compute
self.cache_subdirectory = Path(init_dir(new_directory))
File "/src/libs/libcommon/src/libcommon/storage.py", line 39, in init_dir
makedirs(directory, exist_ok=True)
File "/usr/local/lib/python3.9/os.py", line 225, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/stats-cache/21756284577990-split-descriptive-statistics-Melanit-testsetneura-aff1fad6'
```
|
Rights error for statistics step: ```
INFO: 2023-07-28 16:40:10,399 - root - [split-descriptive-statistics] compute JobManager(job_id=64c3ef6a6c181e70c1093bb7 dataset=Melanit/testsetneuraluma job_info={'job_id': '64c3ef6a6c181e70c1093bb7', 'type': 'split-descriptive-statistics', 'params': {'dataset': 'Melanit/testsetneuraluma', 'revision': '406baec5a6d9379698896d92041f8516ffbb6ba3', 'config': 'Melanit--testsetneuraluma', 'split': 'exampledataset'}, 'priority': <Priority.NORMAL: 'normal'>, 'difficulty': 70}
ERROR: 2023-07-28 16:40:10,401 - root - [Errno 13] Permission denied: '/stats-cache/21756284577990-split-descriptive-statistics-Melanit-testsetneura-aff1fad6'
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_manager.py", line 152, in process
self.job_runner.pre_compute()
File "/src/services/worker/src/worker/job_runners/_job_runner_with_cache.py", line 56, in pre_compute
self.cache_subdirectory = Path(init_dir(new_directory))
File "/src/libs/libcommon/src/libcommon/storage.py", line 39, in init_dir
makedirs(directory, exist_ok=True)
File "/usr/local/lib/python3.9/os.py", line 225, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/stats-cache/21756284577990-split-descriptive-statistics-Melanit-testsetneura-aff1fad6'
```
|
closed
|
2023-07-28T17:24:51Z
|
2023-09-04T11:36:30Z
|
2023-09-04T11:36:30Z
|
severo
|
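A note on the permission error above: this class of misconfiguration can be surfaced once at startup instead of on every job by checking that the cache root is writable. A minimal sketch, assuming the `/stats-cache` mount point from the traceback; the function name is hypothetical and this is not libcommon's actual code:
```python
import os
import tempfile
from pathlib import Path

def assert_writable(directory: str) -> None:
    """Fail fast if the process cannot create files under `directory`."""
    path = Path(directory)
    path.mkdir(parents=True, exist_ok=True)  # raises PermissionError if the parent is not writable
    try:
        with tempfile.NamedTemporaryFile(dir=path):
            pass
    except OSError as err:
        raise RuntimeError(
            f"{directory} is not writable by uid={os.getuid()}; "
            "check the volume ownership or the container's securityContext"
        ) from err

if __name__ == "__main__":
    assert_writable("/stats-cache")  # mount point taken from the traceback above
```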
1,826,745,616 |
PreviousStepFormatError on sil-ai/bloom-speech
|
For a lot of configs in https://huggingface.co/datasets/sil-ai/bloom-speech, we get PreviousStepFormatError.
<img width="1013" alt="Capture dโeฬcran 2023-07-28 aฬ 12 46 09" src="https://github.com/huggingface/datasets-server/assets/1676121/f2160866-ce78-4654-8e3d-ea1396d1b23e">
|
PreviousStepFormatError on sil-ai/bloom-speech: For a lot of configs in https://huggingface.co/datasets/sil-ai/bloom-speech, we get PreviousStepFormatError.
<img width="1013" alt="Capture dโeฬcran 2023-07-28 aฬ 12 46 09" src="https://github.com/huggingface/datasets-server/assets/1676121/f2160866-ce78-4654-8e3d-ea1396d1b23e">
|
closed
|
2023-07-28T16:47:03Z
|
2023-11-03T21:56:40Z
|
2023-11-03T21:56:40Z
|
severo
|
1,826,642,786 |
feat: ๐ธ reduce the number of DNS requests
| null |
feat: ๐ธ reduce the number of DNS requests:
|
closed
|
2023-07-28T15:35:23Z
|
2023-07-28T15:50:33Z
|
2023-07-28T15:50:31Z
|
severo
|
1,826,627,545 |
Add num_rows_total to /rows response
| null |
Add num_rows_total to /rows response:
|
closed
|
2023-07-28T15:24:32Z
|
2023-07-28T16:16:14Z
|
2023-07-28T16:16:13Z
|
severo
|
1,826,584,792 |
The metrics jobs are lasting too long
|
<img width="306" alt="Capture d'écran 2023-07-28 à 11 01 37" src="https://github.com/huggingface/datasets-server/assets/1676121/0f1c0120-b2e2-4133-b5da-312b7981c4f1">
|
The metrics jobs are lasting too long: <img width="306" alt="Capture d'écran 2023-07-28 à 11 01 37" src="https://github.com/huggingface/datasets-server/assets/1676121/0f1c0120-b2e2-4133-b5da-312b7981c4f1">
|
closed
|
2023-07-28T15:02:16Z
|
2023-07-28T15:53:16Z
|
2023-07-28T15:52:44Z
|
severo
|
1,826,231,444 |
Replace deprecated use_auth_token with token
|
Fix partially #1550.
Requires:
- [x] #1589
|
Replace deprecated use_auth_token with token: Fix partially #1550.
Requires:
- [x] #1589
|
closed
|
2023-07-28T11:04:45Z
|
2023-08-08T15:20:32Z
|
2023-08-08T15:20:31Z
|
albertvillanova
|
1,826,191,548 |
Update datasets dependency to 2.14.1 version
|
Fix #1589.
Fix partially #1550.
|
Update datasets dependency to 2.14.1 version: Fix #1589.
Fix partially #1550.
|
closed
|
2023-07-28T10:35:20Z
|
2024-01-26T09:07:40Z
|
2023-08-07T09:07:12Z
|
albertvillanova
|
1,825,271,369 |
/rows should return numTotalRows as /search
|
Needed to implement the search on the Hub; the response should include `num_total_rows`, as /search already does.
|
/rows should return numTotalRows as /search: Needed to implement the search on the Hub; the response should include `num_total_rows`, as /search already does.
|
closed
|
2023-07-27T21:48:18Z
|
2023-07-28T16:16:14Z
|
2023-07-28T16:16:14Z
|
severo
|
1,824,991,934 |
Skip real test
|
Related to https://github.com/huggingface/datasets-server/issues/1085.
Temporarily disabling real test for spawning API to unblock external PRs like https://github.com/huggingface/datasets-server/pull/1570
|
Skip real test: Related to https://github.com/huggingface/datasets-server/issues/1085.
Temporarily disabling real test for spawning API to unblock external PRs like https://github.com/huggingface/datasets-server/pull/1570
|
closed
|
2023-07-27T18:58:34Z
|
2023-07-27T19:20:27Z
|
2023-07-27T19:20:26Z
|
AndreaFrancis
|
1,824,713,758 |
Fix dev admin auth
|
This was needed to use the admin endpoint locally in dev mode.
|
Fix dev admin auth: This was needed to use the admin endpoint locally in dev mode.
|
closed
|
2023-07-27T16:09:35Z
|
2023-07-27T16:36:04Z
|
2023-07-27T16:17:29Z
|
lhoestq
|
1,824,713,551 |
Use cached features in /rows
|
/rows needs the cached `features` since they're not always available in the parquet metadata.
This was causing some `Image` columns to be seen as a struct of binary data, which are not supported in the viewer (shown as "null").
Therefore I'm now passing the `features` from `config-parquet-and-info` to `config-parquet` and then to `config-parquet-metadata`. I kept it backward compatible in case a cached value doesn't have this field yet.
Since it's backward compatible, there's no need for a mongo migration: we can just re-run all the `config-parquet` and `config-parquet-metadata` jobs. I incremented their versions.
close https://github.com/huggingface/datasets-server/issues/1421
|
Use cached features in /rows: /rows needs the cached `features` since they're not always available in the parquet metadata.
This was causing some `Image` columns to be seen as a struct of binary data, which are not supported in the viewer (shown as "null").
Therefore I'm now passing the `features` from `config-parquet-and-info` to `config-parquet` and then to `config-parquet-metadata`. I kept it backward compatible in case a cached value doesn't have this field yet.
Since it's backward compatible, there's no need for a mongo migration: we can just re-run all the `config-parquet` and `config-parquet-metadata` jobs. I incremented their versions.
close https://github.com/huggingface/datasets-server/issues/1421
|
closed
|
2023-07-27T16:09:28Z
|
2023-07-28T12:42:23Z
|
2023-07-28T12:42:22Z
|
lhoestq
|
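To illustrate the backward-compatibility point in the entry above: a cached entry may or may not carry a `features` field, so a reader can fall back to the Arrow schema stored in the parquet file. This is only a sketch with hypothetical field names, not the actual job runner code:
```python
from typing import Any, Optional

import pyarrow.parquet as pq
from datasets import Features

def features_from_cached_entry(content: dict[str, Any], parquet_path: Optional[str] = None) -> Optional[Features]:
    """Prefer the features stored in the cache entry; otherwise derive them from the parquet schema."""
    features_dict = content.get("features")  # absent in entries written before the field was added
    if features_dict is not None:
        return Features.from_dict(features_dict)
    if parquet_path is not None:
        # Fallback: the Arrow schema loses dataset-specific types such as Image, which is
        # exactly why the cached features are preferred when available.
        return Features.from_arrow_schema(pq.read_schema(parquet_path))
    return None
```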
1,824,548,414 |
feat: ๐ธ reduce overcommitment for rows service
|
It will reduce the number of crashes due to insufficient RAM.
|
feat: ๐ธ reduce overcommitment for rows service: It will reduce the number of crashes due to insufficient RAM.
|
closed
|
2023-07-27T14:51:19Z
|
2023-07-27T14:51:49Z
|
2023-07-27T14:51:29Z
|
severo
|
1,824,289,506 |
Fix torch on macos
|
This fixes local deployment using Docker Compose on macOS.
It fixes the following error when building the `worker` Docker image:
```
> [stage-0 13/13] RUN --mount=type=cache,target=/home/.cache/pypoetry/cache --mount=type=cache,target=/home/.cache/pypoetry/artifacts poetry install --no-root:
#17 0.533 Creating virtualenv worker in /src/services/worker/.venv
#17 0.852 Installing dependencies from lock file
...
#17 10.40 AssertionError
#17 10.40
#17 10.40
#17 10.40
#17 10.40 at /usr/local/lib/python3.9/site-packages/poetry/installation/executor.py:742 in _download_link
#17 10.41 738│ # to the original archive.
#17 10.41 739│ archive = self._chef.get_cached_archive_for_link(link, strict=False)
#17 10.41 740│ # 'archive' can at this point never be None. Since we previously downloaded
#17 10.41 741│ # an archive, we now should have something cached that we can use here
#17 10.41 → 742│ assert archive is not None
#17 10.41 743│
#17 10.41 744│ if archive.suffix != ".whl":
#17 10.41 745│ message = (
#17 10.41 746│ f"  • {self.get_operation_message(operation)}:"
```
|
Fix torch on macos: This fixes local deployment using Docker Compose on macOS.
It fixes the following error when building the `worker` Docker image:
```
> [stage-0 13/13] RUN --mount=type=cache,target=/home/.cache/pypoetry/cache --mount=type=cache,target=/home/.cache/pypoetry/artifacts poetry install --no-root:
#17 0.533 Creating virtualenv worker in /src/services/worker/.venv
#17 0.852 Installing dependencies from lock file
...
#17 10.40 AssertionError
#17 10.40
#17 10.40
#17 10.40
#17 10.40 at /usr/local/lib/python3.9/site-packages/poetry/installation/executor.py:742 in _download_link
#17 10.41 738│ # to the original archive.
#17 10.41 739│ archive = self._chef.get_cached_archive_for_link(link, strict=False)
#17 10.41 740│ # 'archive' can at this point never be None. Since we previously downloaded
#17 10.41 741│ # an archive, we now should have something cached that we can use here
#17 10.41 → 742│ assert archive is not None
#17 10.41 743│
#17 10.41 744│ if archive.suffix != ".whl":
#17 10.41 745│ message = (
#17 10.41 746│ f"  • {self.get_operation_message(operation)}:"
```
|
closed
|
2023-07-27T12:39:51Z
|
2023-07-27T16:47:48Z
|
2023-07-27T16:47:47Z
|
lhoestq
|
1,823,390,707 |
remove redundant indices
|
Removal of redundant indices from simple_cache.py.
|
remove redundant indices: Removal of redundant indices from simple_cache.py.
|
closed
|
2023-07-27T00:15:52Z
|
2023-08-01T19:21:59Z
|
2023-07-31T13:18:47Z
|
geethika-123
|
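Background for the entry above: MongoDB can answer queries on any prefix of a compound index, so a separate index on the leading field(s) only adds write cost. A minimal mongoengine sketch with illustrative fields, not the real simple_cache.py models:
```python
from mongoengine import Document, StringField

class CachedResponseExample(Document):
    kind = StringField(required=True)
    dataset = StringField(required=True)
    config = StringField()
    split = StringField()

    meta = {
        "collection": "cached_responses_example",
        "indexes": [
            ("kind", "dataset", "config", "split"),
            # A separate ("kind",) index would be redundant: queries filtering only
            # on `kind` already use the prefix of the compound index above.
        ],
    }
```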
1,822,741,480 |
feat: ๐ธ update the modification date of root dataset dir
|
in cached assets. It will help delete old directories.
|
feat: ๐ธ update the modification date of root dataset dir: in cached assets. It will help delete old directories.
|
closed
|
2023-07-26T16:08:11Z
|
2023-07-26T20:06:43Z
|
2023-07-26T20:06:42Z
|
severo
|
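To illustrate the entry above: bumping the modification time of a dataset's root directory on each access lets a cleanup job delete the least recently used directories first. A minimal sketch with a hypothetical `/cached-assets` root and flat directory layout, not the cached-assets implementation:
```python
import os
import shutil
import time
from pathlib import Path

CACHED_ASSETS_ROOT = Path("/cached-assets")  # hypothetical location

def touch_dataset_dir(dataset_dir_name: str) -> None:
    """Update the mtime of the dataset's root directory so cleanup sees it as recently used."""
    dataset_dir = CACHED_ASSETS_ROOT / dataset_dir_name
    if dataset_dir.is_dir():
        os.utime(dataset_dir)  # bump atime/mtime to "now"; contents are untouched

def delete_old_dataset_dirs(max_age_seconds: int = 7 * 24 * 3600) -> None:
    """Remove dataset directories whose root mtime is older than `max_age_seconds`."""
    cutoff = time.time() - max_age_seconds
    for dataset_dir in CACHED_ASSETS_ROOT.iterdir():
        if dataset_dir.is_dir() and dataset_dir.stat().st_mtime < cutoff:
            shutil.rmtree(dataset_dir, ignore_errors=True)
```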
1,822,651,285 |
feat: ๐ธ only issues with label P0, P1 or P2 cannot be stale
| null |
feat: ๐ธ only issues with label P0, P1 or P2 cannot be stale:
|
closed
|
2023-07-26T15:22:01Z
|
2023-07-26T15:22:15Z
|
2023-07-26T15:22:14Z
|
severo
|
1,822,580,246 |
Fix flaky executor test on long jobs
|
Sometimes the executor doesn't get a chance to kill the long-running job before the job finishes on its own.
close https://github.com/huggingface/datasets-server/issues/1156
|
Fix flaky executor test on long jobs: Sometimes the executor doesn't get a chance to kill the long-running job before the job finishes on its own.
close https://github.com/huggingface/datasets-server/issues/1156
|
closed
|
2023-07-26T14:43:33Z
|
2023-07-26T15:28:17Z
|
2023-07-26T15:28:16Z
|
lhoestq
|