id | title | body | description | state | created_at | updated_at | closed_at | user |
---|---|---|---|---|---|---|---|---|
1,757,081,951 | Blocked dataset is still processed in parquet-and-info | for example Muennighoff/flores200 is blocked but I can see in the logs
lots of
```
INFO: 2023-06-14 14:42:41,256 - root - [config-parquet-and-info] compute JobManager(job_id=6488f35abeb3502eb803738b dataset=Muennighoff/flores200 job_info={'job_id': '6488f35abeb3502eb803738b', 'type': 'config-parquet-and-info', 'params': {'dataset': 'Muennighoff/flores200', 'revision': 'c43c9e247f96a99e813ab6f406b531544be4d77b', 'config': 'fin_Latn-yue_Hant', 'split': None}, 'priority': <Priority.LOW: 'low'>}
```
and lots of
```
INFO: 2023-06-14 14:42:41,815 - root - Killing zombie. Job info = {'job_id': '6488f35abeb3502eb8036d71', 'type': 'config-parquet-and-info', 'params': {'dataset': 'Muennighoff/flores200', 'revision': 'c43c9e247f96a99e813ab6f406b531544be4d77b', 'config': 'eng_Latn-guj_Gujr', 'split': None}, 'priority': <Priority.LOW: 'low'>}
``` | Blocked dataset is still processed in parquet-and-info: for example Muennighoff/flores200 is blocked but I can see in the logs
lots of
```
INFO: 2023-06-14 14:42:41,256 - root - [config-parquet-and-info] compute JobManager(job_id=6488f35abeb3502eb803738b dataset=Muennighoff/flores200 job_info={'job_id': '6488f35abeb3502eb803738b', 'type': 'config-parquet-and-info', 'params': {'dataset': 'Muennighoff/flores200', 'revision': 'c43c9e247f96a99e813ab6f406b531544be4d77b', 'config': 'fin_Latn-yue_Hant', 'split': None}, 'priority': <Priority.LOW: 'low'>}
```
and lots of
```
INFO: 2023-06-14 14:42:41,815 - root - Killing zombie. Job info = {'job_id': '6488f35abeb3502eb8036d71', 'type': 'config-parquet-and-info', 'params': {'dataset': 'Muennighoff/flores200', 'revision': 'c43c9e247f96a99e813ab6f406b531544be4d77b', 'config': 'eng_Latn-guj_Gujr', 'split': None}, 'priority': <Priority.LOW: 'low'>}
``` | closed | 2023-06-14T14:46:20Z | 2024-02-06T14:37:41Z | 2024-02-06T14:37:40Z | lhoestq |
1,756,770,552 | Dataset viewer issue for bigscience/P3 | Same as #960
<img width="1024" alt="image" src="https://github.com/huggingface/datasets-server/assets/18626699/12590e16-db21-443d-8ddc-ec6f9bf0ac41">
| Dataset viewer issue for bigscience/P3: Same as #960
<img width="1024" alt="image" src="https://github.com/huggingface/datasets-server/assets/18626699/12590e16-db21-443d-8ddc-ec6f9bf0ac41">
| closed | 2023-06-14T12:20:36Z | 2023-08-11T15:20:26Z | 2023-08-11T15:20:19Z | VoiceBeer |
1,756,479,455 | feat: 🎸 run jobs killing routines with some random delay | instead of running the zombie-killing routine and the long-jobs-killing routine on a regular basis, we run them between 50% and 150% of that delay. Fixes #1330 | feat: 🎸 run jobs killing routines with some random delay: instead of running the zombie-killing routine and the long-jobs-killing routine on a regular basis, we run them between 50% and 150% of that delay. Fixes #1330 | closed | 2023-06-14T09:41:27Z | 2023-06-14T11:49:06Z | 2023-06-14T11:49:04Z | severo |
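A minimal sketch of the randomized-delay idea described above (names are illustrative, not the actual worker loop):
```python
import random
import time


def sleep_with_jitter(base_interval_seconds: float) -> None:
    # Sleep between 50% and 150% of the base interval so that the workers,
    # which all start at about the same time, don't run the killing routines
    # in lockstep.
    time.sleep(base_interval_seconds * random.uniform(0.5, 1.5))


# Hypothetical usage in a worker loop: kill zombies roughly every 10 minutes.
# while True:
#     kill_zombies()
#     sleep_with_jitter(600)
```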
1,756,430,656 | ci: 🎡 don't run doc CI if no doc has been changed | fixes #1352 | ci: 🎡 don't run doc CI if no doc has been changed: fixes #1352 | closed | 2023-06-14T09:13:44Z | 2023-06-14T09:41:18Z | 2023-06-14T09:40:53Z | severo |
1,756,423,410 | style: 💄 fix style for github action files | null | style: 💄 fix style for github action files: | closed | 2023-06-14T09:10:21Z | 2023-06-14T09:26:27Z | 2023-06-14T09:25:59Z | severo |
1,756,281,915 | Increase PIL limit for images size | See https://huggingface.co/datasets/datadrivenscience/ship-detection/discussions/4#6489674ff4d5239fac762f2a
PIL has a limit for the image size (178,956,970 pixels). When the datasets library tries to get the value for the big image, it raises `DecompressionBombError`
https://github.com/huggingface/datasets-server/blob/7b4b78f04473d8f00e7c2226381982a9a84ab48c/services/worker/src/worker/utils.py#L341
<strike>I'm not sure if we could handle this better (ie: ignoring the error just for these images, replacing with a placeholder?)</strike>
- [ ] increase the limit
- [ ] catch the exception and return proper advice to the user
wdyt @huggingface/datasets? | Increase PIL limit for images size: See https://huggingface.co/datasets/datadrivenscience/ship-detection/discussions/4#6489674ff4d5239fac762f2a
PIL has a limit for the image size (178,956,970 pixels). When the datasets library tries to get the value for the big image, it raises `DecompressionBombError`
https://github.com/huggingface/datasets-server/blob/7b4b78f04473d8f00e7c2226381982a9a84ab48c/services/worker/src/worker/utils.py#L341
<strike>I'm not sure if we could handle this better (ie: ignoring the error just for these images, replacing with a placeholder?)</strike>
- [ ] increase the limit
- [ ] catch the exception and return proper advice to the user
wdyt @huggingface/datasets? | closed | 2023-06-14T07:52:14Z | 2023-06-19T11:21:41Z | 2023-06-19T11:21:41Z | severo |
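A hedged sketch of the two action items above, using PIL's `Image.MAX_IMAGE_PIXELS` and `DecompressionBombError` (illustrative only, not the actual worker code):
```python
from PIL import Image

# 1. Increase the limit: the default threshold is about 178.9 million pixels.
Image.MAX_IMAGE_PIXELS = 1_000_000_000


def open_image_or_advice(path: str):
    # 2. Catch the exception and return actionable advice instead of crashing the job.
    try:
        return Image.open(path)
    except Image.DecompressionBombError:
        return (
            "Image is too large to be processed by the viewer. "
            "Consider storing a downscaled copy in the dataset."
        )
```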
1,755,737,349 | feat: 🎸 add resources for temporary load | See https://github.com/huggingface/datasets-server/issues/1301#issuecomment-1590120816 | feat: 🎸 add resources for temporary load: See https://github.com/huggingface/datasets-server/issues/1301#issuecomment-1590120816 | closed | 2023-06-13T22:26:55Z | 2023-06-13T22:31:21Z | 2023-06-13T22:27:15Z | severo |
1,755,601,448 | Refactor: use "Document" suffix for classes that inherit from mongoengine's Document | ie.
- `Job` -> `JobDocument`
- `CachedResponse` -> `CachedResponseDocument`
- `JobTotalMetric` -> `JobTotalMetricDocument`
- `CacheTotalMetric` -> `CacheTotalMetricDocument`
The idea is to clarify that the class allows manipulating the DB directly and to let the non-suffixed names be free for use as a more high-level class.
It's a nit - easy and useful, but not urgent | Refactor: use "Document" suffix for classes that inherit from mongoengine's Document: ie.
- `Job` -> `JobDocument`
- `CachedResponse` -> `CachedResponseDocument`
- `JobTotalMetric` -> `JobTotalMetricDocument`
- `CacheTotalMetric` -> `CacheTotalMetricDocument`
The idea is to clarify that the class allows manipulating the DB directly and to let the non-suffixed names be free for use as a more high-level class.
It's a nit - easy and useful, but not urgent | closed | 2023-06-13T20:29:05Z | 2023-07-01T15:49:34Z | 2023-07-01T15:49:34Z | severo |
1,755,486,225 | Avoid running jobs twice when step depends on "parallel" jobs | Any time a step depends on two "parallel" steps, the job runs twice, even if the previous one is already computed.
See https://github.com/huggingface/datasets-server/pull/1296#discussion_r1227676712 for context.
Our parallel jobs are:
- "config-split-names-from-streaming"/"config-split-names-from-info" -> dataset-split-names, config-opt-in-out-urls-count and potentially split-duckdb-index" from https://github.com/huggingface/datasets-server/pull/1296
- "split-first-rows-from-parquet" / "split-first-rows-from-streaming" -> split-image-url-columns, dataset-is-valid
We should somehow avoid running the job twice if the result has already been computed for a given revision and job runner version is the same as the current. | Avoid running jobs twice when step depends on "parallel" jobs: Any time a step depends on two "parallel" steps, the job runs twice, even if the previous one is already computed.
See https://github.com/huggingface/datasets-server/pull/1296#discussion_r1227676712 for context.
Our parallel jobs are:
- "config-split-names-from-streaming"/"config-split-names-from-info" -> dataset-split-names, config-opt-in-out-urls-count and potentially split-duckdb-index" from https://github.com/huggingface/datasets-server/pull/1296
- "split-first-rows-from-parquet" / "split-first-rows-from-streaming" -> split-image-url-columns, dataset-is-valid
We should somehow avoid running the job twice if the result has already been computed for a given revision and job runner version is the same as the current. | closed | 2023-06-13T19:01:16Z | 2024-02-23T09:56:19Z | 2024-02-23T09:56:19Z | AndreaFrancis |
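A purely illustrative sketch of the deduplication check suggested above; the helper and field names are assumptions, not the actual datasets-server API:
```python
from typing import Callable, Optional


def should_create_job(
    kind: str,
    dataset: str,
    revision: str,
    job_runner_version: int,
    get_cache_entry: Callable[..., Optional[dict]],  # hypothetical helper
) -> bool:
    # Skip creating the job if a cache entry already exists for the same
    # revision and was produced by the same job runner version.
    entry = get_cache_entry(kind=kind, dataset=dataset)
    if entry is None:
        return True
    return not (
        entry.get("dataset_git_revision") == revision
        and entry.get("job_runner_version") == job_runner_version
    )
```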
1,755,116,082 | Add lock for queue based on mongodb | Example of usage:
```python
with lock(key=f"{job.type}--{job.dataset}", job_id=job.pk):
...
```
or
```python
try:
lock(key=f"{job.type}--{job.dataset}", job_id=job.pk).acquire()
except TimeoutError:
...
```
### Implementation details
- I use the atomicity of `.update()` in mongoengine.
- I defined a new collection "locks" in the "queue" db
- I added created_at and updated_at in case we want to define a TTL for it later
Close https://github.com/huggingface/datasets-server/issues/1356 | Add lock for queue based on mongodb: Example of usage:
```python
with lock(key=f"{job.type}--{job.dataset}", job_id=job.pk):
...
```
or
```python
try:
lock(key=f"{job.type}--{job.dataset}", job_id=job.pk).acquire()
except TimeoutError:
...
```
### Implementation details
- I use the atomicity of `.update()` in mongoengine.
- I defined a new collection "locks" in the "queue" db
- I added created_at and updated_at in case we want to define a TTL for it later
Close https://github.com/huggingface/datasets-server/issues/1356 | closed | 2023-06-13T15:09:01Z | 2023-06-14T14:53:47Z | 2023-06-14T14:53:46Z | lhoestq |
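A minimal sketch of a mongoengine-based lock along these lines (simplified; collection and field names are assumptions, not the shipped implementation):
```python
from datetime import datetime

from mongoengine import DateTimeField, Document, StringField
from mongoengine.errors import NotUniqueError
from pymongo.errors import DuplicateKeyError


class LockDocument(Document):
    # One document per lock key; "owner" holds the id of the job that owns it.
    key = StringField(required=True, unique=True)
    owner = StringField(null=True)
    created_at = DateTimeField(required=True)
    updated_at = DateTimeField()
    meta = {"collection": "locks"}


def try_acquire(key: str, job_id: str) -> bool:
    # The filtered update is atomic: it succeeds only if the lock is free
    # (owner is None) or already held by this job; otherwise the upsert hits
    # the unique index on "key" and the acquisition fails.
    now = datetime.utcnow()
    try:
        LockDocument.objects(key=key, owner__in=[None, job_id]).update(
            upsert=True,
            set__owner=job_id,
            set__updated_at=now,
            set_on_insert__created_at=now,
        )
        return True
    except (NotUniqueError, DuplicateKeyError):
        return False


def release(key: str, job_id: str) -> None:
    LockDocument.objects(key=key, owner=job_id).update(
        set__owner=None, set__updated_at=datetime.utcnow()
    )
```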
1,754,899,862 | Locking mechanism | Certain jobs like config-parquet-and-info write to a repository and may have concurrency issues (see e.g. https://github.com/huggingface/datasets-server/issues/1349)
For config-parquet-and-info we need to lock the job at the dataset level, so that no other job of the same type for the same dataset writes at the same time.
This can be implemented using mongodb | Locking mechanism: Certain jobs like config-parquet-and-info write to a repository and may have concurrency issues (see e.g. https://github.com/huggingface/datasets-server/issues/1349)
For config-parquet-and-info we need to lock the job at the dataset level, so that no other job of the same type for the same dataset writes at the same time.
This can be implemented using mongodb | closed | 2023-06-13T13:28:09Z | 2023-06-14T14:53:47Z | 2023-06-14T14:53:47Z | lhoestq |
1,753,404,692 | Multi commit Parquet copies | close https://github.com/huggingface/datasets-server/issues/1349 | Multi commit Parquet copies: close https://github.com/huggingface/datasets-server/issues/1349 | closed | 2023-06-12T19:12:46Z | 2023-06-16T12:59:03Z | 2023-06-16T12:59:01Z | lhoestq |
1,753,204,961 | [docs] Separate page for each library | Implements #1269 | [docs] Separate page for each library: Implements #1269 | closed | 2023-06-12T17:17:02Z | 2023-06-12T21:14:23Z | 2023-06-12T18:40:43Z | stevhliu |
1,752,846,332 | refactor: 💡 rename required_by_dataset_viewer: enables_preview | Change the response of `/valid`, from:
```json
{
"valid": ["datasets", "that", "have", "/first-rows"]
}
```
to
```json
{
"preview": ["datasets", "that", "only", "have", "/first-rows"],
"viewer": ["datasets", "that", "have", "/rows", "and", "/size"],
"valid": ["preview", "and", "viewer"]
}
```
Note that `valid` is the same as before, and will be removed in a future PR, once the Hub has been updated to use "preview" and "viewer".
### Notes
I propose to first try this. If it works, we will remove the `valid` field, update the docs (https://huggingface.co/docs/datasets-server/valid) and do the same switch for `/is-valid`.
| refactor: 💡 rename required_by_dataset_viewer: enables_preview: Change the response of `/valid`, from:
```json
{
"valid": ["datasets", "that", "have", "/first-rows"]
}
```
to
```json
{
"preview": ["datasets", "that", "only", "have", "/first-rows"],
"viewer": ["datasets", "that", "have", "/rows", "and", "/size"],
"valid": ["preview", "and", "viewer"]
}
```
Note that `valid` is the same as before, and will be removed in a future PR, once the Hub has been updated to use "preview" and "viewer".
### Notes
I propose to first try this. If it works, we will remove the `valid` field, update the docs (https://huggingface.co/docs/datasets-server/valid) and do the same switch for `/is-valid`.
| closed | 2023-06-12T14:06:29Z | 2023-06-12T16:29:07Z | 2023-06-12T16:29:06Z | severo |
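A small usage sketch of the new response shape, assuming the public /valid endpoint:
```python
import requests

response = requests.get("https://datasets-server.huggingface.co/valid", timeout=30)
response.raise_for_status()
payload = response.json()

# Datasets with only /first-rows are listed under "preview";
# datasets with /rows and /size are listed under "viewer";
# "valid" is kept only until the Hub switches to the two new fields.
print(len(payload.get("preview", [])), "preview-only datasets")
print(len(payload.get("viewer", [])), "datasets with the full viewer")
print(len(payload.get("valid", [])), "datasets in the legacy 'valid' field")
```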
1,752,711,632 | only run doc CI if docs/ files are modified. | null | only run doc CI if docs/ files are modified.: | closed | 2023-06-12T12:57:45Z | 2023-06-14T09:48:40Z | 2023-06-14T09:40:54Z | severo |
1,752,701,869 | feat: 🎸 upgrade duckdb and gradio | (fixes a vulnerability) | feat: 🎸 upgrade duckdb and gradio: (fixes a vulnerability) | closed | 2023-06-12T12:52:35Z | 2023-06-12T12:56:58Z | 2023-06-12T12:56:29Z | severo |
1,752,694,813 | Trigger admin app space redeploy in CI | Currently it's done manually, which might lead to the graph plot not being synced with the production graph version.
We can do this [as described here](https://huggingface.co/docs/hub/spaces-github-actions) I hope. | Trigger admin app space redeploy in CI: Currently it's done manually, which might lead to the graph plot not being synced with the production graph version.
We can do this [as described here](https://huggingface.co/docs/hub/spaces-github-actions) I hope. | open | 2023-06-12T12:48:28Z | 2023-08-07T15:50:46Z | null | polinaeterna |
1,752,397,831 | Can't copy parquet files when there are too many of them | right now on https://huggingface.co/datasets/tiiuae/falcon-refinedweb the `parquet-and-info` job raises
```python
Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py", line 259, in hf_raise_for_status
response.raise_for_status()
File "/src/services/worker/.venv/lib/python3.9/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 413 Client Error: Payload Too Large for url: https://huggingface.co/api/datasets/tiiuae/falcon-refinedweb/paths-info/d4d0c8a489e10bb4fbce947d16811b8b8eb544f5
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_manager.py", line 163, in process
job_result = self.job_runner.compute()
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 931, in compute
compute_config_parquet_and_info_response(
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 867, in compute_config_parquet_and_info_response
committer_hf_api.create_commit(
File "/src/services/worker/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/huggingface_hub/hf_api.py", line 828, in _inner
return fn(self, *args, **kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/huggingface_hub/hf_api.py", line 2687, in create_commit
files_to_copy = fetch_lfs_files_to_copy(
File "/src/services/worker/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/huggingface_hub/_commit_api.py", line 533, in fetch_lfs_files_to_copy
for src_repo_file in src_repo_files:
File "/src/services/worker/.venv/lib/python3.9/site-packages/huggingface_hub/hf_api.py", line 2041, in list_files_info
hf_raise_for_status(response)
File "/src/services/worker/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py", line 301, in hf_raise_for_status
raise HfHubHTTPError(str(e), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: 413 Client Error: Payload Too Large for url: https://huggingface.co/api/datasets/tiiuae/falcon-refinedweb/paths-info/d4d0c8a489e10bb4fbce947d16811b8b8eb544f5 (Request ID: Root=1-6486dcf3-4f1942863e61c6e5073aa26e)
too many parameters
```
we might need to update hfh once https://github.com/huggingface/huggingface_hub/issues/1503 is fixed | Can't copy parquet files when there are too many of them: right now on https://huggingface.co/datasets/tiiuae/falcon-refinedweb the `parquet-and-info` job raises
```python
Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py", line 259, in hf_raise_for_status
response.raise_for_status()
File "/src/services/worker/.venv/lib/python3.9/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 413 Client Error: Payload Too Large for url: https://huggingface.co/api/datasets/tiiuae/falcon-refinedweb/paths-info/d4d0c8a489e10bb4fbce947d16811b8b8eb544f5
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_manager.py", line 163, in process
job_result = self.job_runner.compute()
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 931, in compute
compute_config_parquet_and_info_response(
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 867, in compute_config_parquet_and_info_response
committer_hf_api.create_commit(
File "/src/services/worker/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/huggingface_hub/hf_api.py", line 828, in _inner
return fn(self, *args, **kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/huggingface_hub/hf_api.py", line 2687, in create_commit
files_to_copy = fetch_lfs_files_to_copy(
File "/src/services/worker/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/huggingface_hub/_commit_api.py", line 533, in fetch_lfs_files_to_copy
for src_repo_file in src_repo_files:
File "/src/services/worker/.venv/lib/python3.9/site-packages/huggingface_hub/hf_api.py", line 2041, in list_files_info
hf_raise_for_status(response)
File "/src/services/worker/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py", line 301, in hf_raise_for_status
raise HfHubHTTPError(str(e), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: 413 Client Error: Payload Too Large for url: https://huggingface.co/api/datasets/tiiuae/falcon-refinedweb/paths-info/d4d0c8a489e10bb4fbce947d16811b8b8eb544f5 (Request ID: Root=1-6486dcf3-4f1942863e61c6e5073aa26e)
too many parameters
```
we might need to update hfh once https://github.com/huggingface/huggingface_hub/issues/1503 is fixed | closed | 2023-06-12T10:12:10Z | 2023-06-16T13:11:30Z | 2023-06-16T12:59:03Z | lhoestq |
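A hedged sketch of the multi-commit workaround (splitting the copy operations into several smaller commits); the chunk size and helper shape are assumptions:
```python
from huggingface_hub import CommitOperationCopy, HfApi


def copy_parquet_in_chunks(
    api: HfApi,
    repo_id: str,
    pairs: list[tuple[str, str]],  # (source path, destination path) in the repo
    chunk_size: int = 100,  # arbitrary; small enough to avoid "413 Payload Too Large"
) -> None:
    operations = [
        CommitOperationCopy(src_path_in_repo=src, path_in_repo=dst) for src, dst in pairs
    ]
    for start in range(0, len(operations), chunk_size):
        api.create_commit(
            repo_id=repo_id,
            repo_type="dataset",
            revision="refs/convert/parquet",
            operations=operations[start : start + chunk_size],
            commit_message=f"Copy parquet files (part {start // chunk_size + 1})",
        )
```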
1,752,353,794 | Fix admin app | - fix the columns of dataset status
- prefill the step name (in a select element) in refresh tab | Fix admin app: - fix the columns of dataset status
- prefill the step name (in a select element) in refresh tab | closed | 2023-06-12T09:51:58Z | 2023-06-12T11:19:30Z | 2023-06-12T11:19:01Z | severo |
1,752,343,807 | Fix parquet-and-info download_size | close https://github.com/huggingface/datasets-server/issues/1346 | Fix parquet-and-info download_size: close https://github.com/huggingface/datasets-server/issues/1346 | closed | 2023-06-12T09:45:31Z | 2023-06-12T10:11:21Z | 2023-06-12T10:10:51Z | lhoestq |
1,752,265,530 | TypeError when trying to copy parquet files | On https://huggingface.co/datasets/bigcode/the-stack, we get:
```json
{
"error": "unsupported operand type(s) for +=: 'NoneType' and 'int'",
"cause_exception": "TypeError",
"cause_message": "unsupported operand type(s) for +=: 'NoneType' and 'int'",
"cause_traceback": [
"Traceback (most recent call last):\n",
" File \"/src/services/worker/src/worker/job_manager.py\", line 163, in process\n job_result = self.job_runner.compute()\n",
" File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 931, in compute\n compute_config_parquet_and_info_response(\n",
" File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 818, in compute_config_parquet_and_info_response\n fill_builder_info(builder, hf_token=hf_token)\n",
" File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 650, in fill_builder_info\n builder.info.download_size += sum(sizes)\n",
"TypeError: unsupported operand type(s) for +=: 'NoneType' and 'int'\n"
]
}
```
Error is here: https://github.com/huggingface/datasets-server/blob/e0eed9c9ad09d57a857e6498c44b76d6bd71ff76/services/worker/src/worker/job_runners/config/parquet_and_info.py#L650 | TypeError when trying to copy parquet files: On https://huggingface.co/datasets/bigcode/the-stack, we get:
```json
{
"error": "unsupported operand type(s) for +=: 'NoneType' and 'int'",
"cause_exception": "TypeError",
"cause_message": "unsupported operand type(s) for +=: 'NoneType' and 'int'",
"cause_traceback": [
"Traceback (most recent call last):\n",
" File \"/src/services/worker/src/worker/job_manager.py\", line 163, in process\n job_result = self.job_runner.compute()\n",
" File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 931, in compute\n compute_config_parquet_and_info_response(\n",
" File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 818, in compute_config_parquet_and_info_response\n fill_builder_info(builder, hf_token=hf_token)\n",
" File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 650, in fill_builder_info\n builder.info.download_size += sum(sizes)\n",
"TypeError: unsupported operand type(s) for +=: 'NoneType' and 'int'\n"
]
}
```
Error is here: https://github.com/huggingface/datasets-server/blob/e0eed9c9ad09d57a857e6498c44b76d6bd71ff76/services/worker/src/worker/job_runners/config/parquet_and_info.py#L650 | closed | 2023-06-12T09:01:34Z | 2023-06-12T10:10:53Z | 2023-06-12T10:10:53Z | severo |
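A minimal sketch of a defensive fix for the line referenced above (not necessarily the fix that shipped): treat a missing `download_size` as 0 before accumulating.
```python
from typing import Optional


def add_download_size(current: Optional[int], sizes: list[int]) -> int:
    # builder.info.download_size may be None when the dataset info does not
    # declare it; coalesce to 0 before adding the parquet file sizes.
    return (current or 0) + sum(sizes)


# e.g. builder.info.download_size = add_download_size(builder.info.download_size, sizes)
```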
1,752,261,695 | feat: 🎸 launch a retry for a lot of errors | also increase the resources and fix the admin app | feat: 🎸 launch a retry for a lot of errors: also increase the resources and fix the admin app | closed | 2023-06-12T08:59:57Z | 2023-06-12T09:52:10Z | 2023-06-12T09:51:40Z | severo |
1,751,929,885 | Update locked cachecontrol version from yanked 0.13.0 to 0.13.1 | Update locked `cachecontrol` version from yanked 0.13.0 to 0.13.1.
Fix #1343. | Update locked cachecontrol version from yanked 0.13.0 to 0.13.1: Update locked `cachecontrol` version from yanked 0.13.0 to 0.13.1.
Fix #1343. | closed | 2023-06-12T05:31:57Z | 2023-06-12T09:25:52Z | 2023-06-12T09:25:24Z | albertvillanova |
1,751,922,241 | The locked version 0.13.0 for cachecontrol is a yanked version | We should update the locked version of `cachecontrol` dependency to 0.13.1 because current locked version 0.13.0 is yanked. See: https://pypi.org/project/CacheControl/0.13.0/ | The locked version 0.13.0 for cachecontrol is a yanked version: We should update the locked version of `cachecontrol` dependency to 0.13.1 because current locked version 0.13.0 is yanked. See: https://pypi.org/project/CacheControl/0.13.0/ | closed | 2023-06-12T05:26:22Z | 2023-06-12T09:25:25Z | 2023-06-12T09:25:25Z | albertvillanova |
1,750,937,659 | feat: 🎸 give 10x more time for the jobs for cerebras/SlimPajama | null | feat: 🎸 give 10x more time for the jobs for cerebras/SlimPajama: | closed | 2023-06-10T11:54:18Z | 2023-06-10T11:58:24Z | 2023-06-10T11:54:23Z | severo |
1,750,300,884 | Create missing Jobs when /rows cache does not exist yet | In https://github.com/huggingface/datasets-server/pull/1287, /rows were modified to depend on `parquet_utils` which calls `get_previous_step_or_raise` instead of `get_cache_entry_from_steps`. This implies that if the cache entry does not exist yet, it will throw a `CachedArtifactError` error instead of retrying later + create missing Jobs if needed.
See discussion https://github.com/huggingface/datasets-server/pull/1287#discussion_r1224576658
We should keep consistency in the responses at the API level and have the same behavior on /rows as in other endpoints. | Create missing Jobs when /rows cache does not exist yet: In https://github.com/huggingface/datasets-server/pull/1287, /rows were modified to depend on `parquet_utils` which calls `get_previous_step_or_raise` instead of `get_cache_entry_from_steps`. This implies that if the cache entry does not exist yet, it will throw a `CachedArtifactError` error instead of retrying later + create missing Jobs if needed.
See discussion https://github.com/huggingface/datasets-server/pull/1287#discussion_r1224576658
We should keep consistency in the responses at the API level and have the same behavior on /rows as in other endpoints. | closed | 2023-06-09T18:05:07Z | 2023-06-26T11:01:37Z | 2023-06-26T11:01:37Z | AndreaFrancis |
1,750,166,012 | [documentation] grammatical fixes in first_rows.mdx | The changes made are grammatical and do not affect the ideas communicated in the file. | [documentation] grammatical fixes in first_rows.mdx: The changes made are grammatical and do not affect the ideas communicated in the file. | closed | 2023-06-09T16:15:36Z | 2023-06-12T18:42:17Z | 2023-06-12T18:42:17Z | LiamSwayne |
1,749,815,515 | Format: the `failed` entries in aggregated steps should have more information | See https://github.com/huggingface/datasets-server/issues/1338
```json
{
"parquet_files": [],
"pending": [],
"failed": [
{
"kind": "config-parquet",
"dataset": "bigcode/the-stack-dedup",
"config": "bigcode--the-stack-dedup",
"split": null
}
]
}
```
returned by https://datasets-server.huggingface.co/parquet?dataset=bigcode%2Fthe-stack-dedup misses information about why the steps have failed.
We should return the error, or at least the error_code | Format: the `failed` entries in aggregated steps should have more information: See https://github.com/huggingface/datasets-server/issues/1338
```json
{
"parquet_files": [],
"pending": [],
"failed": [
{
"kind": "config-parquet",
"dataset": "bigcode/the-stack-dedup",
"config": "bigcode--the-stack-dedup",
"split": null
}
]
}
```
returned by https://datasets-server.huggingface.co/parquet?dataset=bigcode%2Fthe-stack-dedup misses information about why the steps have failed.
We should return the error, or at least the error_code | closed | 2023-06-09T12:41:10Z | 2023-06-09T15:38:51Z | 2023-06-09T15:38:51Z | severo |
1,749,631,085 | Parquet endpoint not returning urls to parquet files for the-stack-dedup dataset. | When calling https://datasets-server.huggingface.co/parquet?dataset=bigcode%2Fthe-stack-dedup to retrieve parquet files for this dataset the response is this:
``` json
{
'parquet_files': [],
'pending': [],
'failed':[{'kind': 'config-parquet',
'dataset': 'bigcode[/the-stack-dedup](https://file+.vscode-resource.vscode-cdn.net/the-stack-dedup)',
'config': 'bigcode--the-stack-dedup',
'split': None
}]
}
```
Other endpoints are working correctly (/is-valid, /splits, /first-rows).
Below is my test code:
```python
import requests
headers = {"Authorization": f"Bearer {'**MY_TOKEN**'}"}
API_URL = "https://datasets-server.huggingface.co/parquet?dataset=bigcode%2Fthe-stack-dedup"
def query():
    response = requests.get(API_URL, headers=headers)
    return response.json()
data = query()
print(data)
```
What would cause this? Other datasets that I tried are working correctly.
| Parquet endpoint not returning urls to parquet files for the-stack-dedup dataset.: When calling https://datasets-server.huggingface.co/parquet?dataset=bigcode%2Fthe-stack-dedup to retrieve parquet files for this dataset the response is this:
``` json
{
'parquet_files': [],
'pending': [],
'failed':[{'kind': 'config-parquet',
'dataset': 'bigcode[/the-stack-dedup](https://file+.vscode-resource.vscode-cdn.net/the-stack-dedup)',
'config': 'bigcode--the-stack-dedup',
'split': None
}]
}
```
Other endpoints are working correctly (/is-valid, /splits, /first-rows).
Below is my test code:
```python
import requests
headers = {"Authorization": f"Bearer {'**MY_TOKEN**'}"}
API_URL = "https://datasets-server.huggingface.co/parquet?dataset=bigcode%2Fthe-stack-dedup"
def query():
    response = requests.get(API_URL, headers=headers)
    return response.json()
data = query()
print(data)
```
What would cause this? Other datasets that I tried are working correctly.
| closed | 2023-06-09T10:34:10Z | 2023-07-01T15:58:28Z | 2023-07-01T15:58:20Z | deadmau5p |
1,749,404,103 | [doc build] Use secrets | Companion pr to https://github.com/huggingface/doc-builder/pull/379
Please feel free to merge it yourself | [doc build] Use secrets: Companion pr to https://github.com/huggingface/doc-builder/pull/379
Please feel free to merge it yourself | closed | 2023-06-09T08:27:27Z | 2023-06-09T15:30:53Z | 2023-06-09T15:30:52Z | mishig25 |
1,748,595,825 | feat: 🎸 upgrade transformers and remove exception | fixes #1333 | feat: 🎸 upgrade transformers and remove exception: fixes #1333 | closed | 2023-06-08T20:34:15Z | 2023-06-08T21:24:10Z | 2023-06-08T21:24:09Z | severo |
1,748,591,072 | feat: 🎸 remove limit on number of started jobs per namespace | It's not needed anymore. Let's simplify the code | feat: 🎸 remove limit on number of started jobs per namespace: It's not needed anymore. Let's simplify the code | closed | 2023-06-08T20:29:30Z | 2023-06-08T20:39:13Z | 2023-06-08T20:39:11Z | severo |
1,748,557,679 | feat: 🎸 remove the last four blocked datasets | since the issue (infinite loop when fetching the commits) has been fixed in the Hub API
See https://github.com/huggingface/moon-landing/pull/6572 (internal) | feat: 🎸 remove the last four blocked datasets: since the issue (infinite loop when fetching the commits) has been fixed in the Hub API
See https://github.com/huggingface/moon-landing/pull/6572 (internal) | closed | 2023-06-08T20:04:35Z | 2023-06-08T20:05:05Z | 2023-06-08T20:04:58Z | severo |
1,748,532,757 | upgrade transformers to remove vulnerability | https://github.com/huggingface/transformers/releases/tag/v4.30.0 | upgrade transformers to remove vulnerability: https://github.com/huggingface/transformers/releases/tag/v4.30.0 | closed | 2023-06-08T19:48:36Z | 2023-06-08T21:24:10Z | 2023-06-08T21:24:10Z | severo |
1,748,077,115 | Generate random cache subdirectory for dataset job runner | Generate random cache subdirectory for dataset job runner.
This might fix the stale file handle error:
```
OSError: [Errno 116] Stale file handle
```
See: https://huggingface.co/datasets/tapaco/discussions/4 | Generate random cache subdirectory for dataset job runner: Generate random cache subdirectory for dataset job runner.
This might fix the stale file handle error:
```
OSError: [Errno 116] Stale file handle
```
See: https://huggingface.co/datasets/tapaco/discussions/4 | closed | 2023-06-08T14:55:34Z | 2023-06-08T15:31:55Z | 2023-06-08T15:28:55Z | albertvillanova |
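A short sketch of the idea, assuming a base cache directory (names are illustrative):
```python
import uuid
from pathlib import Path


def make_job_cache_dir(base_cache_directory: str) -> Path:
    # A unique subdirectory per job avoids two job runners reusing the same
    # path on the shared storage and hitting "OSError: [Errno 116] Stale file handle".
    path = Path(base_cache_directory) / f"job-{uuid.uuid4().hex}"
    path.mkdir(parents=True, exist_ok=True)
    return path
```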
1,748,046,060 | feat: 🎸 reduce resources | we don't need more for now | feat: 🎸 reduce resources: we don't need more for now | closed | 2023-06-08T14:38:29Z | 2023-06-08T14:38:59Z | 2023-06-08T14:38:34Z | severo |
1,747,947,228 | Use randomness in zombie+long jobs killing process? | Every worker runs the function to kill zombies every 10 minutes. But as all the workers are launched at about the same time, it's pretty strange: nothing happens during 10 minutes, then 80 workers try to kill the zombies.
Same for the long jobs.
The interval between runs could be random, between 5 and 15 minutes, for example. | Use randomness in zombie+long jobs killing process?: Every worker runs the function to kill zombies every 10 minutes. But as all the workers are launched at about the same time, it's pretty strange: nothing happens during 10 minutes, then 80 workers try to kill the zombies.
Same for the long jobs.
The interval between runs could be random, between 5 and 15 minutes, for example. | closed | 2023-06-08T13:56:13Z | 2023-06-14T11:49:06Z | 2023-06-14T11:49:06Z | severo |
1,747,834,418 | Remove QUEUE_MAX_JOBS_PER_NAMESPACE? | I think that we don't need it anymore, and that removing it would reduce the load on mongo (a bit). See https://github.com/huggingface/datasets-server/pull/1328#issue-1747833163 | Remove QUEUE_MAX_JOBS_PER_NAMESPACE?: I think that we don't need it anymore, and that removing it would reduce the load on mongo (a bit). See https://github.com/huggingface/datasets-server/pull/1328#issue-1747833163 | closed | 2023-06-08T12:49:49Z | 2023-06-09T15:18:50Z | 2023-06-09T15:18:49Z | severo |
1,747,833,163 | feat: 🎸 disable the limit per namespace | (only increased to 1000, which is a lot more than the max number of started jobs). I think that we don't need this restriction anymore, because the jobs are limited in time, and the datasets with fewer started jobs are already prioritized. I think that the code could be removed at one point (let's first see how it goes), which would simplify the queries to mongo | feat: 🎸 disable the limit per namespace: (only increased to 1000, which is a lot more than the max number of started jobs). I think that we don't need this restriction anymore, because the jobs are limited in time, and the datasets with fewer started jobs are already prioritized. I think that the code could be removed at one point (let's first see how it goes), which would simplify the queries to mongo | closed | 2023-06-08T12:48:58Z | 2023-06-08T12:49:27Z | 2023-06-08T12:49:05Z | severo |
1,747,366,656 | feat: 🎸 block 4 datasets for sil-ai | we temporarily block 4 more datasets until we investigate further, since they generate too many errors and have a lot of configs. | feat: 🎸 block 4 datasets for sil-ai: we temporarily block 4 more datasets until we investigate further, since they generate too many errors and have a lot of configs. | closed | 2023-06-08T08:27:59Z | 2023-06-08T08:28:27Z | 2023-06-08T08:28:04Z | severo |
1,746,829,743 | MigrationQueueDeleteTTLIndex throws an exception | Context: https://github.com/huggingface/datasets-server/pull/1238#discussion_r1202804263
While running migrations of type MigrationQueueDeleteTTLIndex, an error is thrown:
`pymongo.errors.OperationFailure: An equivalent index already exists with the same name but different options. Requested index: { v: 2, key: { finished_at: 1 }, name: "finished_at_1", background: false, expireAfterSeconds: 600 }, existing index: { v: 2, key: { finished_at: 1 }, name: "finished_at_1", background: false, expireAfterSeconds: 86400 }, full error: {'ok': 0.0, 'errmsg': 'An equivalent index already exists with the same name but different options. Requested index: { v: 2, key: { finished_at: 1 }, name: "finished_at_1", background: false, expireAfterSeconds: 600 }, existing index: { v: 2, key: { finished_at: 1 }, name: "finished_at_1", background: false, expireAfterSeconds: 86400 }', 'code': 85, 'codeName': 'IndexOptionsConflict', '$clusterTime': {'clusterTime': Timestamp(1684856718, 127), 'signature': {'hash': b'\x9e\x18\x11\xa9\xfd\x9c\xec\x12\x19\xeb\xa7\xad\x94\xfb\x86<Y\xdc\xb4n', 'keyId': 7184422908209397768}}, 'operationTime': Timestamp(1684856718, 127)}`
See also https://huggingface.slack.com/archives/C04L6P8KNQ5/p1684856801355259?thread_ts=1684855715.011289&cid=C04L6P8KNQ5 (internal) for details.
Maybe this is related to other k8s objects running at the same time as the deploy and trying to re-create the old index. Should we stop all pods while deploying? Maybe scale to 0 replicas? | MigrationQueueDeleteTTLIndex throws an exception: Context: https://github.com/huggingface/datasets-server/pull/1238#discussion_r1202804263
While running migrations of type MigrationQueueDeleteTTLIndex, an error is thrown:
`pymongo.errors.OperationFailure: An equivalent index already exists with the same name but different options. Requested index: { v: 2, key: { finished_at: 1 }, name: "finished_at_1", background: false, expireAfterSeconds: 600 }, existing index: { v: 2, key: { finished_at: 1 }, name: "finished_at_1", background: false, expireAfterSeconds: 86400 }, full error: {'ok': 0.0, 'errmsg': 'An equivalent index already exists with the same name but different options. Requested index: { v: 2, key: { finished_at: 1 }, name: "finished_at_1", background: false, expireAfterSeconds: 600 }, existing index: { v: 2, key: { finished_at: 1 }, name: "finished_at_1", background: false, expireAfterSeconds: 86400 }', 'code': 85, 'codeName': 'IndexOptionsConflict', '$clusterTime': {'clusterTime': Timestamp(1684856718, 127), 'signature': {'hash': b'\x9e\x18\x11\xa9\xfd\x9c\xec\x12\x19\xeb\xa7\xad\x94\xfb\x86<Y\xdc\xb4n', 'keyId': 7184422908209397768}}, 'operationTime': Timestamp(1684856718, 127)}`
See also https://huggingface.slack.com/archives/C04L6P8KNQ5/p1684856801355259?thread_ts=1684855715.011289&cid=C04L6P8KNQ5 (internal) for details.
Maybe this is related to other k8s objects running at the same time as the deploy and trying to re-create the old index. Should we stop all pods while deploying? Maybe scale to 0 replicas? | closed | 2023-06-07T23:40:48Z | 2023-06-15T21:05:52Z | 2023-06-15T21:05:37Z | AndreaFrancis |
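A hedged sketch of one way to make such a migration idempotent with pymongo: drop the conflicting index by name, then re-create it with the new `expireAfterSeconds` (names taken from the error message above):
```python
from pymongo import MongoClient
from pymongo.errors import OperationFailure


def recreate_finished_at_ttl_index(mongo_uri: str, db_name: str, collection_name: str) -> None:
    collection = MongoClient(mongo_uri)[db_name][collection_name]
    try:
        # Drop the old index (expireAfterSeconds=86400) if it is still there.
        collection.drop_index("finished_at_1")
    except OperationFailure:
        pass  # the index does not exist: nothing to drop
    # Re-create it with the new TTL (600 seconds, as requested by the migration).
    collection.create_index("finished_at", name="finished_at_1", expireAfterSeconds=600)
```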
1,746,600,148 | Adding condition to Jobs Collection - ttl index | Context: https://github.com/huggingface/datasets-server/issues/1323
Some records in the queue have a `finished_at` field but their final status is different from success/error; we should not have these scenarios.
Adding a condition to the TTL index will let us identify potential issues without losing data for troubleshooting.
Second part of https://github.com/huggingface/datasets-server/issues/1326
Depends on https://github.com/huggingface/datasets-server/pull/1378 | Adding condition to Jobs Collection - ttl index: Context: https://github.com/huggingface/datasets-server/issues/1323
Some records in the queue have a `finished_at` field but their final status is different from success/error; we should not have these scenarios.
Adding a condition to the TTL index will let us identify potential issues without losing data for troubleshooting.
Second part of https://github.com/huggingface/datasets-server/issues/1326
Depends on https://github.com/huggingface/datasets-server/pull/1378 | closed | 2023-06-07T20:11:18Z | 2023-06-15T22:32:23Z | 2023-06-15T22:32:22Z | AndreaFrancis |
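A sketch of the conditional (partial) TTL index with pymongo, assuming MongoDB >= 6.0 (which allows `$in` in `partialFilterExpression`) and assuming the terminal statuses are `success` and `error`:
```python
from pymongo import ASCENDING, MongoClient


def create_conditional_ttl_index(mongo_uri: str, db_name: str, collection_name: str) -> None:
    collection = MongoClient(mongo_uri)[db_name][collection_name]
    # Only jobs whose status is terminal are expired by the TTL; jobs that ended
    # up with an unexpected status stay in the collection for troubleshooting.
    collection.create_index(
        [("finished_at", ASCENDING)],
        name="finished_at_1",
        expireAfterSeconds=600,
        partialFilterExpression={"status": {"$in": ["success", "error"]}},
    )
```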
1,746,589,600 | feat: 🎸 reduce the number of workers | also: increase the number of started jobs per namespace, because the remaining jobs are only for a small number of datasets | feat: 🎸 reduce the number of workers: also: increase the number of started jobs per namespace, because the remaining jobs are only for a small number of datasets | closed | 2023-06-07T20:04:06Z | 2023-06-07T20:04:39Z | 2023-06-07T20:04:25Z | severo |
1,746,279,513 | The number of started jobs is a lot bigger than the number of workers | We currently have nearly 2000 started jobs, and "only" 200 workers
<img width="389" alt="Capture dโeฬcran 2023-06-07 aฬ 18 27 10" src="https://github.com/huggingface/datasets-server/assets/1676121/df48b775-b423-4d99-af59-7117355685c8">
My guess: the jobs raise an uncaught exception and are never finished properly in the database. They will be deleted afterwards anyway (zombie killer) but possibly we must fix something here. | The number of started jobs is a lot bigger than the number of workers: We currently have nearly 2000 started jobs, and "only" 200 workers
<img width="389" alt="Capture dโeฬcran 2023-06-07 aฬ 18 27 10" src="https://github.com/huggingface/datasets-server/assets/1676121/df48b775-b423-4d99-af59-7117355685c8">
My guess: the jobs raise an uncaught exception and are never finished properly in the database. They will be deleted afterwards anyway (zombie killer) but possibly we must fix something here. | closed | 2023-06-07T16:29:39Z | 2023-06-26T15:37:28Z | 2023-06-26T15:37:28Z | severo |
1,746,250,249 | The config-parquet-and-info step generates empty commits | Maybe related to https://github.com/huggingface/datasets-server/issues/1308
<img width="764" alt="Capture dโeฬcran 2023-06-07 aฬ 18 10 01" src="https://github.com/huggingface/datasets-server/assets/1676121/3627b3f4-54dd-4713-a60b-f985123337c8">
<img width="1552" alt="Capture dโeฬcran 2023-06-07 aฬ 18 10 08" src="https://github.com/huggingface/datasets-server/assets/1676121/9632b8a9-d25d-43b6-a0f0-0740a8da7328">
| The config-parquet-and-info step generates empty commits: Maybe related to https://github.com/huggingface/datasets-server/issues/1308
<img width="764" alt="Capture dโeฬcran 2023-06-07 aฬ 18 10 01" src="https://github.com/huggingface/datasets-server/assets/1676121/3627b3f4-54dd-4713-a60b-f985123337c8">
<img width="1552" alt="Capture dโeฬcran 2023-06-07 aฬ 18 10 08" src="https://github.com/huggingface/datasets-server/assets/1676121/9632b8a9-d25d-43b6-a0f0-0740a8da7328">
| closed | 2023-06-07T16:10:20Z | 2024-06-19T13:59:44Z | 2024-06-19T13:59:44Z | severo |
1,746,088,944 | Copy parquet when possible | close https://github.com/huggingface/datasets-server/issues/1273
I use CommitOperationCopy from the huggingface_hub library, only available on `main` for now.
I don't do any check on the parquet row groups size but I think it's fine for now.
related to https://github.com/huggingface/datasets/pull/5935 in `datasets` | Copy parquet when possible: close https://github.com/huggingface/datasets-server/issues/1273
I use CommitOperationCopy from the huggingface_hub library, only available on `main` for now.
I don't do any check on the parquet row groups size but I think it's fine for now.
related to https://github.com/huggingface/datasets/pull/5935 in `datasets` | closed | 2023-06-07T14:50:34Z | 2023-06-09T17:41:14Z | 2023-06-09T17:41:12Z | lhoestq |
1,745,927,094 | fix refresh in admin ui | following https://github.com/huggingface/datasets-server/pull/1264 | fix refresh in admin ui: following https://github.com/huggingface/datasets-server/pull/1264 | closed | 2023-06-07T13:34:10Z | 2023-06-07T14:13:27Z | 2023-06-07T14:10:24Z | lhoestq |
1,745,697,630 | feat: 🎸 increase the resources even more | also: restore a bit more replicas for the light job runners, because we have a lot of waiting jobs for them now. | feat: 🎸 increase the resources even more: also: restore a bit more replicas for the light job runners, because we have a lot of waiting jobs for them now. | closed | 2023-06-07T11:38:34Z | 2023-06-07T11:42:14Z | 2023-06-07T11:38:39Z | severo |
1,745,501,962 | feat: 🎸 increase resources again | and restore the number of resources for the "light" jobs (all but 6) that never have pending jobs | feat: 🎸 increase resources again: and restore the number of resources for the "light" jobs (all but 6) that never have pending jobs | closed | 2023-06-07T09:53:41Z | 2023-06-07T09:56:58Z | 2023-06-07T09:53:46Z | severo |
1,745,276,582 | feat: 🎸 increase the resources again to flush the queue | null | feat: 🎸 increase the resources again to flush the queue: | closed | 2023-06-07T07:47:40Z | 2023-06-07T07:51:15Z | 2023-06-07T07:47:46Z | severo |
1,745,252,859 | Fix missing word and typo in parquet_process docs | Minor fix in docs. | Fix missing word and typo in parquet_process docs: Minor fix in docs. | closed | 2023-06-07T07:32:15Z | 2023-06-07T15:26:57Z | 2023-06-07T15:26:56Z | albertvillanova |
1,745,053,154 | Complete raise if dataset requires manual download | Follow-up of:
- #1309
See: https://github.com/huggingface/datasets-server/issues/1307#issuecomment-1579288660
Fix partially #1307.
EDIT:
TODO as requested by @severo:
- [x] Raise `DatasetManualDownloadError` from `config-split-names-from-streaming`
- [ ] ~~Propagate the error codes to the aggregator steps (config -> dataset, in this case) so that we can show the appropriate message on the Hub~~
- To be done in another PR | Complete raise if dataset requires manual download: Follow-up of:
- #1309
See: https://github.com/huggingface/datasets-server/issues/1307#issuecomment-1579288660
Fix partially #1307.
EDIT:
TODO as requested by @severo:
- [x] Raise `DatasetManualDownloadError` from `config-split-names-from-streaming`
- [ ] ~~Propagate the error codes to the aggregator steps (config -> dataset, in this case) so that we can show the appropriate message on the Hub~~
- To be done in another PR | closed | 2023-06-07T05:13:25Z | 2023-06-07T08:45:41Z | 2023-06-07T08:42:37Z | albertvillanova |
1,744,585,292 | UnexpectedError on /rows endpoint | Both splits of https://huggingface.co/datasets/Anthropic/hh-rlhf consistently return an `UnexpectedError` error for the pages of the dataset viewer (ie. when requesting /rows)
See https://huggingface.co/datasets/Anthropic/hh-rlhf/viewer/Anthropic--hh-rlhf/test?p=1 for example.
<img width="869" alt="Capture dโeฬcran 2023-06-06 aฬ 22 47 33" src="https://github.com/huggingface/datasets-server/assets/1676121/42a7d602-e621-44d7-9b9f-b94512b15377">
| UnexpectedError on /rows endpoint: Both splits of https://huggingface.co/datasets/Anthropic/hh-rlhf consistently return an `UnexpectedError` error for the pages of the dataset viewer (ie. when requesting /rows)
See https://huggingface.co/datasets/Anthropic/hh-rlhf/viewer/Anthropic--hh-rlhf/test?p=1 for example.
<img width="869" alt="Capture dโeฬcran 2023-06-06 aฬ 22 47 33" src="https://github.com/huggingface/datasets-server/assets/1676121/42a7d602-e621-44d7-9b9f-b94512b15377">
| closed | 2023-06-06T20:47:41Z | 2023-07-17T16:39:48Z | 2023-07-16T15:04:19Z | severo |
1,744,372,083 | docs: ✏️ fix size of the shards | null | docs: ✏️ fix size of the shards: | closed | 2023-06-06T18:27:15Z | 2023-06-06T18:32:15Z | 2023-06-06T18:28:55Z | severo |
1,744,083,239 | [docs] Clarify Parquet doc | Addresses this [comment](https://github.com/huggingface/blog/pull/1177#pullrequestreview-1465226825) to clarify the requirements for generating a Parquet dataset. | [docs] Clarify Parquet doc: Addresses this [comment](https://github.com/huggingface/blog/pull/1177#pullrequestreview-1465226825) to clarify the requirements for generating a Parquet dataset. | closed | 2023-06-06T15:15:18Z | 2023-06-06T15:35:40Z | 2023-06-06T15:32:38Z | stevhliu |
1,744,049,144 | fix: Parallel job runner for split-first-rows-from-parquet | It must be split-first-rows-from-streaming, not config-split-names-from-info | fix: Parallel job runner for split-first-rows-from-parquet: It must be split-first-rows-from-streaming, not config-split-names-from-info | closed | 2023-06-06T14:57:35Z | 2023-06-06T19:06:09Z | 2023-06-06T15:07:53Z | AndreaFrancis |
1,744,030,092 | feat: 🎸 increasing the number of pods to process the queue | We have long-lasting jobs due to the refresh of all the datasets that had the "GatedExtraFieldsError" (see #1298). | feat: 🎸 increasing the number of pods to process the queue: We have long-lasting jobs due to the refresh of all the datasets that had the "GatedExtraFieldsError" (see #1298). | closed | 2023-06-06T14:47:26Z | 2023-06-06T14:50:49Z | 2023-06-06T14:47:31Z | severo |
1,743,939,320 | Raise if dataset requires manual download | Do not support datasets that require manual download.
Fix #1307. | Raise if dataset requires manual download: Do not support datasets that require manual download.
Fix #1307. | closed | 2023-06-06T14:00:12Z | 2023-06-06T15:25:56Z | 2023-06-06T15:22:56Z | albertvillanova |
1,743,835,950 | Concurrent creation of parquet files generates errors | See https://huggingface.co/datasets/sil-ai/bloom-vist for example:
<img width="874" alt="Capture dโeฬcran 2023-06-06 aฬ 15 01 11" src="https://github.com/huggingface/datasets-server/assets/1676121/72cb0b13-bb1d-475a-a04c-d9b6f06e3e78">
Half of its configs have no parquet file (thus no number of rows in the select component)
See the details of one of them:
https://datasets-server.huggingface.co/size?dataset=sil-ai/bloom-vist&config=abc
```json
{"error":"412 Client Error: Precondition Failed for url: https://huggingface.co/api/datasets/sil-ai/bloom-vist/commit/refs%2Fconvert%2Fparquet (Request ID: Root=1-647f0082-12624d7b1d357e88202837b3)\n\nA commit has happened since. Please refresh and try again."}
```
It's because of concurrency issues in the creation of parquet files.
Some ideas to solve this:
1. add this error to the retryable errors (by the way: the exception is not properly caught: `UnexpectedError`), then wait for the next backfill cron job to fix them
2. retry immediately, until it's OK
| Concurrent creation of parquet files generates errors: See https://huggingface.co/datasets/sil-ai/bloom-vist for example:
<img width="874" alt="Capture dโeฬcran 2023-06-06 aฬ 15 01 11" src="https://github.com/huggingface/datasets-server/assets/1676121/72cb0b13-bb1d-475a-a04c-d9b6f06e3e78">
Half of its configs have no parquet file (thus no number of rows in the select component)
See the details of one of them:
https://datasets-server.huggingface.co/size?dataset=sil-ai/bloom-vist&config=abc
```json
{"error":"412 Client Error: Precondition Failed for url: https://huggingface.co/api/datasets/sil-ai/bloom-vist/commit/refs%2Fconvert%2Fparquet (Request ID: Root=1-647f0082-12624d7b1d357e88202837b3)\n\nA commit has happened since. Please refresh and try again."}
```
It's because of concurrency issues in the creation of parquet files.
Some ideas to solve this:
1. add this error to the retryable errors (by the way: the exception is not properly caught: `UnexpectedError`), then wait for the next backfill cron job to fix them
2. retry immediately, until it's OK
| closed | 2023-06-06T13:06:36Z | 2023-06-28T08:08:49Z | 2023-06-28T08:08:48Z | severo |
1,743,449,867 | Do not support datasets that require manual download | Currently, datasets that require manual download show a weird error message due to a TypeError. See: https://huggingface.co/datasets/timit_asr/discussions/2
```
expected str, bytes or os.PathLike object, not NoneType
```
I think we should not support these datasets, as we already do with private datasets.
EDIT: TODO:
- [x] Raise `DatasetManualDownloadError` from `config-parquet-and-info`
- [x] #1309
- [x] Raise `DatasetManualDownloadError` from `config-split-names-from-streaming`
- [x] #1315
- [ ] Propagate the error codes to the aggregator steps (config -> dataset, in this case) so that we can show the appropriate message on the Hub | Do not support datasets that require manual download: Currently, datasets that require manual download show a weird error message due to a TypeError. See: https://huggingface.co/datasets/timit_asr/discussions/2
```
expected str, bytes or os.PathLike object, not NoneType
```
I think we should not support these datasets, as we already do with private datasets.
EDIT: TODO:
- [x] Raise `DatasetManualDownloadError` from `config-parquet-and-info`
- [x] #1309
- [x] Raise `DatasetManualDownloadError` from `config-split-names-from-streaming`
- [x] #1315
- [ ] Propagate the error codes to the aggregator steps (config -> dataset, in this case) so that we can show the appropriate message on the Hub | closed | 2023-06-06T09:22:47Z | 2023-07-16T15:04:20Z | 2023-07-16T15:04:20Z | albertvillanova |
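A minimal sketch of the check, relying on the `manual_download_instructions` attribute of a `datasets` builder (the error class here is illustrative):
```python
from typing import Optional

from datasets import load_dataset_builder


class DatasetManualDownloadError(Exception):
    """Illustrative error type for datasets that require manual download."""


def raise_if_requires_manual_download(dataset: str, config: Optional[str] = None) -> None:
    builder = load_dataset_builder(dataset, config)
    if builder.manual_download_instructions is not None:
        raise DatasetManualDownloadError(
            f"{dataset} requires manual data download and is not supported."
        )
```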
1,743,441,734 | chore: 🤖 fix the name of the dev secret | null | chore: 🤖 fix the name of the dev secret: | closed | 2023-06-06T09:17:29Z | 2023-06-06T09:20:54Z | 2023-06-06T09:17:45Z | severo |
1,742,693,711 | [docs] Fix quickstart | Removes the extra "Access parquet files" section in the quickstart. | [docs] Fix quickstart: Removes the extra "Access parquet files" section in the quickstart. | closed | 2023-06-05T22:04:53Z | 2023-06-06T14:27:12Z | 2023-06-06T08:13:00Z | stevhliu |
1,742,205,792 | The order of the splits in sciq is not correct | https://datasets-server.huggingface.co/splits?dataset=sciq
gives `test / train / validation` (alphabetical)
while we expect `train / validation / test`
See https://huggingface.co/datasets/sciq/blob/main/sciq.py#L67
| The order of the splits in sciq is not correct: https://datasets-server.huggingface.co/splits?dataset=sciq
gives `test / train / validation` (alphabetical)
while we expect `train / validation / test`
See https://huggingface.co/datasets/sciq/blob/main/sciq.py#L67
| closed | 2023-06-05T17:17:19Z | 2023-06-06T07:56:14Z | 2023-06-06T07:19:17Z | severo |
1,741,714,759 | feat: 🎸 update dependencies | main upgrades: cryptography (fixes a vulnerability that made the CI fail), pyarrow 11 -> 12. Also: mypy 1.1.1 -> 1.3.0 (should not affect) | feat: 🎸 update dependencies: main upgrades: cryptography (fixes a vulnerability that made the CI fail), pyarrow 11 -> 12. Also: mypy 1.1.1 -> 1.3.0 (should not affect) | closed | 2023-06-05T12:50:41Z | 2023-06-05T15:59:54Z | 2023-06-05T15:56:10Z | severo |
1,741,436,720 | chore: 🤖 remove obsolete issue template | The Hub no longer proposes to open an issue in this repo. Thus: no need to keep this issue template, which makes it confusing when opening an issue. | chore: 🤖 remove obsolete issue template: The Hub no longer proposes to open an issue in this repo. Thus: no need to keep this issue template, which makes it confusing when opening an issue. | closed | 2023-06-05T10:05:49Z | 2023-06-05T12:11:57Z | 2023-06-05T11:21:26Z | severo |
1,741,419,508 | The error shown on the Hub is weird when it's from a previous error | It does not help at all
<img width="1017" alt="Capture d’écran 2023-06-05 à 11 57 51" src="https://github.com/huggingface/datasets-server/assets/1676121/966905fb-2de7-428c-9583-fb51ee506a36">
```
Traceback: The previous step failed, the error is copied to this step: kind='config-info' dataset='stas/openwebtext-10k' config='plain_text' split=None---The previous step failed, the error is copied to this step: kind='config-parquet-and-info' dataset='stas/openwebtext-10k' config='plain_text' split=None---
``` | The error shown on the Hub is weird when it's from a previous error: It does not help at all
<img width="1017" alt="Capture d’écran 2023-06-05 à 11 57 51" src="https://github.com/huggingface/datasets-server/assets/1676121/966905fb-2de7-428c-9583-fb51ee506a36">
```
Traceback: The previous step failed, the error is copied to this step: kind='config-info' dataset='stas/openwebtext-10k' config='plain_text' split=None---The previous step failed, the error is copied to this step: kind='config-parquet-and-info' dataset='stas/openwebtext-10k' config='plain_text' split=None---
``` | closed | 2023-06-05T09:58:06Z | 2023-06-15T20:56:23Z | 2023-06-15T20:56:23Z | severo |
1,741,412,119 | Don't populate `dataset` field with 0 values if we don't have the information | Instead of showing `num_rows=0` and `num_bytes...=0`, we should make the `dataset` field optional. 0 is a wrong value.
https://datasets-server.huggingface.co/size?dataset=stas/openwebtext-10k
```json
{
"size": {
"dataset": {
"dataset": "stas/openwebtext-10k",
"num_bytes_original_files": 0,
"num_bytes_parquet_files": 0,
"num_bytes_memory": 0,
"num_rows": 0
},
"configs": [],
"splits": []
},
"pending": [],
"failed": [
{
"kind": "config-size",
"dataset": "stas/openwebtext-10k",
"config": "plain_text",
"split": null
}
]
}
```
| Don't populate `dataset` field with 0 values if we don't have the information: Instead of showing `num_rows=0` and `num_bytes...=0`, we should make the `dataset` field optional. 0 is a wrong value.
https://datasets-server.huggingface.co/size?dataset=stas/openwebtext-10k
```json
{
"size": {
"dataset": {
"dataset": "stas/openwebtext-10k",
"num_bytes_original_files": 0,
"num_bytes_parquet_files": 0,
"num_bytes_memory": 0,
"num_rows": 0
},
"configs": [],
"splits": []
},
"pending": [],
"failed": [
{
"kind": "config-size",
"dataset": "stas/openwebtext-10k",
"config": "plain_text",
"split": null
}
]
}
```
| open | 2023-06-05T09:53:47Z | 2023-08-11T15:16:36Z | null | severo |
1,741,387,442 | format error: `failed` (and `pending`) should not have `split` field for config-size | ERROR: type should be string, got "\r\n\r\nhttps://datasets-server.huggingface.co/size?dataset=stas/openwebtext-10k\r\n\r\n```json\r\n{\r\n \"size\": {\r\n \"dataset\": {\r\n \"dataset\": \"stas/openwebtext-10k\",\r\n \"num_bytes_original_files\": 0,\r\n \"num_bytes_parquet_files\": 0,\r\n \"num_bytes_memory\": 0,\r\n \"num_rows\": 0\r\n },\r\n \"configs\": [],\r\n \"splits\": []\r\n },\r\n \"pending\": [],\r\n \"failed\": [\r\n {\r\n \"kind\": \"config-size\",\r\n \"dataset\": \"stas/openwebtext-10k\",\r\n \"config\": \"plain_text\",\r\n \"split\": null\r\n }\r\n ]\r\n}\r\n```" | format error: `failed` (and `pending`) should not have `split` field for config-size:
https://datasets-server.huggingface.co/size?dataset=stas/openwebtext-10k
```json
{
"size": {
"dataset": {
"dataset": "stas/openwebtext-10k",
"num_bytes_original_files": 0,
"num_bytes_parquet_files": 0,
"num_bytes_memory": 0,
"num_rows": 0
},
"configs": [],
"splits": []
},
"pending": [],
"failed": [
{
"kind": "config-size",
"dataset": "stas/openwebtext-10k",
"config": "plain_text",
"split": null
}
]
}
``` | closed | 2023-06-05T09:37:55Z | 2023-08-11T15:24:59Z | 2023-08-11T15:24:58Z | severo |
1,738,250,890 | feat: 🎸 use app token for commits to the Hub | The user token is no longer accepted for PARQUET_AND_INFO_COMMITTER_HF_TOKEN. It now must be an app token, associated with a user for the datasets-maintainers org. See details in the README.
BREAKING CHANGE: 🧨 the PARQUET_AND_INFO_COMMITTER_HF_TOKEN value must be changed.
Note that this PR includes https://github.com/huggingface/datasets-server/pull/1303 (upgrade dependencies, to fix a vulnerability, and adapt the code)
Also: note that we deleted the following error codes: `AskAccessHubRequestError`, `GatedDisabledError` and `GatedExtraFieldsError`: so, we should refresh all the datasets with these errors | feat: 🎸 use app token for commits to the Hub: The user token is no longer accepted for PARQUET_AND_INFO_COMMITTER_HF_TOKEN. It now must be an app token, associated with a user for the datasets-maintainers org. See details in the README.
BREAKING CHANGE: 🧨 the PARQUET_AND_INFO_COMMITTER_HF_TOKEN value must be changed.
Note that this PR includes https://github.com/huggingface/datasets-server/pull/1303 (upgrade dependencies, to fix a vulnerability, and adapt the code)
Also: note that we deleted the following error codes: `AskAccessHubRequestError`, `GatedDisabledError` and `GatedExtraFieldsError`: so, we should refresh all the datasets with these errors | closed | 2023-06-02T13:54:22Z | 2023-06-06T09:09:07Z | 2023-06-06T09:05:59Z | severo |
1,738,184,572 | Dataset Viewer issue for codeparrot/github-code-clean | ### Link
https://huggingface.co/datasets/codeparrot/github-code-clean
### Description
The dataset viewer is not working for dataset codeparrot/github-code-clean.
Error details:
```
Error code: JobRunnerCrashedError
Traceback: The previous step failed, the error is copied to this step: kind='/config-names' dataset='codeparrot/github-code-clean' config=None split=None---
```
| Dataset Viewer issue for codeparrot/github-code-clean: ### Link
https://huggingface.co/datasets/codeparrot/github-code-clean
### Description
The dataset viewer is not working for dataset codeparrot/github-code-clean.
Error details:
```
Error code: JobRunnerCrashedError
Traceback: The previous step failed, the error is copied to this step: kind='/config-names' dataset='codeparrot/github-code-clean' config=None split=None---
```
| closed | 2023-06-02T13:15:32Z | 2023-06-02T15:44:06Z | 2023-06-02T15:44:06Z | NN1985 |
1,738,120,670 | feat: Index the (text) datasets contents to enable full-text search - DuckDB | First part for https://github.com/huggingface/datasets-server/issues/629 (index only)
New job runner to index parquet files at a split level using duckdb
- Using the parquet content, it will validate whether there are string columns; if so, it will index all columns, unless there is any image/audio/binary feature type (these won't be supported yet, since it could affect the viewer once the search API is available; this case needs a better design).
- Will create a duckdb index file that contains the parquet + FTS search
- Will create a new branch duckdb/index in the dataset repository and upload the duckdb index file
| feat: Index the (text) datasets contents to enable full-text search - DuckDB: First part for https://github.com/huggingface/datasets-server/issues/629 (index only)
New job runner to index parquet files at a split level using duckdb
- Using the parquet content, it will validate whether there are string columns; if so, it will index all columns, unless there is any image/audio/binary feature type (these won't be supported yet, since it could affect the viewer once the search API is available; this case needs a better design).
- Will create a duckdb index file that contains the parquet + FTS search
- Will create a new branch duckdb/index in the dataset repository and upload the duckdb index file
| closed | 2023-06-02T12:31:48Z | 2023-06-27T13:46:57Z | 2023-06-27T13:41:17Z | AndreaFrancis |
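A rough sketch of the indexing step described in the issue above, assuming DuckDB's Python client and its `fts` extension; the file names, table name, and column list are placeholders, not the job runner's actual implementation:

```python
import duckdb

parquet_files = ["0000.parquet", "0001.parquet"]  # the split's parquet shards (placeholder names)
string_columns = ["text"]                         # columns detected as string-typed

con = duckdb.connect("index.duckdb")  # the file that would be pushed to the duckdb/index branch
con.execute("INSTALL fts;")
con.execute("LOAD fts;")

# Copy the parquet data into the database, adding a row id to key the FTS index on.
files_sql = ", ".join(f"'{f}'" for f in parquet_files)
con.execute(
    "CREATE OR REPLACE TABLE data AS "
    f"SELECT row_number() OVER () AS __id, * FROM read_parquet([{files_sql}]);"
)

# Build the full-text-search index over the detected string columns.
columns_sql = ", ".join(f"'{c}'" for c in string_columns)
con.execute(f"PRAGMA create_fts_index('data', '__id', {columns_sql});")
con.close()
```

Once a search endpoint exists, queries could rank rows with the extension's `match_bm25` scoring function against the `data` table.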
1,737,877,549 | In /valid, associate an enum to every dataset | Instead of a list of datasets:
```json
{
"valid": [
"0-hero/OIG-small-chip2",
"000alen/semantic",
"04-07-22/wep-probes",
"0721boy/nva-pic",
...
]
}
```
we could return an object, where the datasets are the keys, and the values are lists of available features: `"preview" | "parquet" | "parquet-sample"`
```json
{
"0-hero/OIG-small-chip2": ["preview"],
"000alen/semantic": ["preview", "parquet"],
"04-07-22/wep-probes": ["preview", "parquet-sample"],
"0721boy/nva-pic": [],
...
}
```
The list of features for each dataset helps the Hub show the appropriate text/icons/UI.
Note that we might want to change the name of the endpoint to avoid a breaking change, and to be more descriptive: `/features-by-dataset` (I don't like it, because `features` has another meaning in datasets).
Also: the response could take too long to be computed online as it's currently done (it already takes about 300ms to compute), so we should precompute it regularly with a cronjob and cache the result. See #891 | In /valid, associate an enum to every dataset: Instead of a list of datasets:
```json
{
"valid": [
"0-hero/OIG-small-chip2",
"000alen/semantic",
"04-07-22/wep-probes",
"0721boy/nva-pic",
...
]
}
```
we could return an object, where the datasets are the keys, and the values are lists of available features: `"preview" | "parquet" | "parquet-sample"`
```json
{
"0-hero/OIG-small-chip2": ["preview"],
"000alen/semantic": ["preview", "parquet"],
"04-07-22/wep-probes": ["preview", "parquet-sample"],
"0721boy/nva-pic": [],
...
}
```
The list of features for each dataset helps the Hub show the appropriate text/icons/UI.
Note that we might want to change the name of the endpoint to avoid a breaking change, and to be more descriptive: `/features-by-dataset` (I don't like it, because `features` has another meaning in datasets).
Also: the response could take too long to be computed online as it's currently done (it already takes about 300ms to compute), so we should precompute it regularly with a cronjob and cache the result. See #891 | closed | 2023-06-02T10:01:25Z | 2023-06-14T11:56:33Z | 2023-06-14T11:56:32Z | severo |
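If the shape proposed above were adopted, a client could read the per-dataset feature lists directly. A hedged example (the endpoint currently returns a plain list, so this only illustrates the proposal, not an existing API):

```python
import requests

# Hypothetical response shape from the proposal:
# {dataset_name: ["preview" | "parquet" | "parquet-sample", ...]}
response = requests.get("https://datasets-server.huggingface.co/valid", timeout=10)
features_by_dataset: dict[str, list[str]] = response.json()

# e.g. the Hub could show the viewer only for datasets that support "preview"
with_preview = [name for name, features in features_by_dataset.items() if "preview" in features]
print(len(with_preview))
```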
1,737,836,097 | Update huggingface-hub dependency to 0.15.1 version | Close #1291. | Update huggingface-hub dependency to 0.15.1 version: Close #1291. | closed | 2023-06-02T09:38:58Z | 2023-06-02T11:48:19Z | 2023-06-02T11:45:25Z | albertvillanova |
1,737,714,796 | fix: 🐛 dataset viewer is valid if any of first-rows work | we were only considering that a dataset supports the dataset viewer when the first-rows were generated with the "streaming" mode. | fix: 🐛 dataset viewer is valid if any of first-rows work: we were only considering that a dataset supports the dataset viewer when the first-rows were generated with the "streaming" mode. | closed | 2023-06-02T08:35:06Z | 2023-06-02T09:13:45Z | 2023-06-02T09:10:48Z | severo
1,737,119,605 | Pagination for configs with <100 items | ### Link
https://huggingface.co/datasets/yjernite/stable-bias_grounding-images_multimodel_3_12_22_clusters
### Description
I have a couple of datasets where I can't see all items in the viewer: the first page always shows the first 11 items, but pagination is only triggered for configs with >100 items
Other example:
- https://huggingface.co/datasets/yjernite/prof_images_blip__prompthero-openjourney-v4
| Pagination for configs with <100 items: ### Link
https://huggingface.co/datasets/yjernite/stable-bias_grounding-images_multimodel_3_12_22_clusters
### Description
I have a couple of datasets where I can't see all items in the viewer: the first page always shows the first 11 items, but pagination is only triggered for configs with >100 items
Other example:
- https://huggingface.co/datasets/yjernite/prof_images_blip__prompthero-openjourney-v4
| closed | 2023-06-01T21:26:47Z | 2023-10-05T12:43:48Z | 2023-07-03T08:24:58Z | yjernite |
1,736,522,030 | Upgrade huggingface_hub to 0.15.1 | https://github.com/huggingface/huggingface_hub/releases/tag/v0.15.0
- use `run_as_future=True` in parquet upload?
- `list_datasets` breaking change should not affect us | Upgrade huggingface_hub to 0.15.1: https://github.com/huggingface/huggingface_hub/releases/tag/v0.15.0
- use `run_as_future=True` in parquet upload?
- `list_datasets` breaking change should not affect us | closed | 2023-06-01T15:00:56Z | 2023-06-02T11:45:27Z | 2023-06-02T11:45:27Z | severo |
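For reference, a hedged sketch of the `run_as_future=True` usage mentioned above (the repo id, paths and token are placeholders; this is not the worker's actual upload code):

```python
from huggingface_hub import HfApi

api = HfApi(token="hf_xxx")  # placeholder token

# With huggingface_hub>=0.15, HfApi methods accept run_as_future=True and
# return a concurrent.futures.Future instead of blocking.
future = api.upload_file(
    path_or_fileobj="0000.parquet",
    path_in_repo="plain_text/train/0000.parquet",
    repo_id="user/dataset",
    repo_type="dataset",
    revision="refs/convert/parquet",
    run_as_future=True,
)
# ... do other work while the upload proceeds ...
commit_info = future.result()  # block only when the result is actually needed
```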
1,736,434,556 | Dataset Viewer issue for Venkatesh4342/augmented_ner | ### Link
https://huggingface.co/datasets/Venkatesh4342/augmented_ner
### Description
The dataset viewer is not working for dataset Venkatesh4342/augmented_ner.
Error details:
```
Error code: ResponseNotReady
```
couldn't upload a dataset in json format | Dataset Viewer issue for Venkatesh4342/augmented_ner: ### Link
https://huggingface.co/datasets/Venkatesh4342/augmented_ner
### Description
The dataset viewer is not working for dataset Venkatesh4342/augmented_ner.
Error details:
```
Error code: ResponseNotReady
```
couldn't upload a dataset in json format | closed | 2023-06-01T14:18:20Z | 2023-06-02T08:34:02Z | 2023-06-02T05:22:10Z | Venkatesh3132003 |
1,736,392,252 | feat: 🎸 remove temporary workers | the cache has been filled; there are no more jobs to run
<img width="1160" alt="Capture d’écran 2023-06-01 à 15 58 47" src="https://github.com/huggingface/datasets-server/assets/1676121/4bb8742e-3fbd-4525-a86f-ce6db22ec794">
 | feat: 🎸 remove temporary workers: the cache has been filled; there are no more jobs to run
<img width="1160" alt="Capture d’écran 2023-06-01 à 15 58 47" src="https://github.com/huggingface/datasets-server/assets/1676121/4bb8742e-3fbd-4525-a86f-ce6db22ec794">
| closed | 2023-06-01T13:58:11Z | 2023-06-01T14:05:34Z | 2023-06-01T14:02:34Z | severo |
1,736,272,497 | Set the list of error codes to retry | Currently we don't retry. | Set the list of error codes to retry: Currently we don't retry. | closed | 2023-06-01T12:57:11Z | 2024-02-06T14:36:09Z | 2024-02-06T14:36:09Z | severo |
1,736,263,106 | refactor: first-rows-from-parquet use same code as in /rows | Almost all the logic implemented in `/rows` to read the parquet files was moved to `parquet.py` in `libcommon`, so that it can be used in the first-rows-from-parquet job runner and will also be used for [https://github.com/huggingface/datasets-server/issues/1087 (URLs reading).](https://github.com/huggingface/datasets-server/pull/1296) (Index duckdb from parquet)
Note:
- Almost all the code used in `/rows` related to parquet logic was moved to `libcommon.parquet_utils.py` (No logic changed, just step profiler method name)
- The `get_previous_step_or_raise` method was moved from `worker.utils.py` to `libcommon.simple_cache.py` so that it can be used inside `libcommon.parquet_utils.py`; that is why many files changed (most of them job_runners), due to the imports.
 | refactor: first-rows-from-parquet use same code as in /rows: Almost all the logic implemented in `/rows` to read the parquet files was moved to `parquet.py` in `libcommon`, so that it can be used in the first-rows-from-parquet job runner and will also be used for [https://github.com/huggingface/datasets-server/issues/1087 (URLs reading).](https://github.com/huggingface/datasets-server/pull/1296) (Index duckdb from parquet)
Note:
- Almost all the code used in `/rows` related to parquet logic was moved to `libcommon.parquet_utils.py` (No logic changed, just step profiler method name)
- The `get_previous_step_or_raise` method was moved from `worker.utils.py` to `libcommon.simple_cache.py` so that it can be used inside `libcommon.parquet_utils.py`; that is why many files changed (most of them job_runners), due to the imports.
| closed | 2023-06-01T12:52:30Z | 2023-06-09T18:05:31Z | 2023-06-09T18:05:29Z | AndreaFrancis |
1,736,262,987 | Use X-Request-ID header in the logs | Requests received from the Hub contain the `X-Request-ID` header, which helps link issues across the different systems. We should include that information in the logs when relevant. | Use X-Request-ID header in the logs: Requests received from the Hub contain the `X-Request-ID` header, which helps link issues across the different systems. We should include that information in the logs when relevant. | open | 2023-06-01T12:52:25Z | 2023-08-11T15:14:48Z | null | severo
1,736,259,877 | Create a cronjob to delete dangling cache entries | from the cache database
related to https://github.com/huggingface/datasets-server/issues/1284 | Create a cronjob to delete dangling cache entries: from the cache database
related to https://github.com/huggingface/datasets-server/issues/1284 | closed | 2023-06-01T12:50:39Z | 2024-02-06T14:35:22Z | 2024-02-06T14:35:22Z | severo |
1,736,258,389 | Create a cron job to clean the dangling assets and cached assets | Related to https://github.com/huggingface/datasets-server/issues/1122 | Create a cron job to clean the dangling assets and cached assets: Related to https://github.com/huggingface/datasets-server/issues/1122 | open | 2023-06-01T12:49:52Z | 2024-02-06T14:33:52Z | null | severo |
1,736,230,825 | Use one app token per scope | In particular:
- one app token to commit parquet files (associated with user [`parquet-converter`](https://huggingface.co/parquet-converter))
- one app token to get access to the gated datasets | Use one app token per scope: In particular:
- one app token to commit parquet files (associated with user [`parquet-converter`](https://huggingface.co/parquet-converter))
- one app token to get access to the gated datasets | closed | 2023-06-01T12:33:41Z | 2023-06-14T11:55:21Z | 2023-06-14T11:55:20Z | severo |
1,735,785,269 | Remove torchaudio dependency | Remove `torchaudio` dependency.
Fix partially #1281. | Remove torchaudio dependency: Remove `torchaudio` dependency.
Fix partially #1281. | closed | 2023-06-01T08:39:39Z | 2023-06-01T12:27:02Z | 2023-06-01T12:23:42Z | albertvillanova |
1,735,766,964 | Remove dependency on torch and torchaudio | As suggested by @severo:
> should we remove the dependency to torch and torchaudio? cc @polinaeterna
- [ ] #1282
- [ ] Remove torch | Remove dependency on torch and torchaudio: As suggested by @severo:
> should we remove the dependency to torch and torchaudio? cc @polinaeterna
- [ ] #1282
- [ ] Remove torch | closed | 2023-06-01T08:27:45Z | 2023-06-28T08:06:08Z | 2023-06-28T08:03:19Z | albertvillanova |
1,735,761,162 | Use soundfile for mp3 decoding instead of torchaudio | As suggested by @severo:
> Use soundfile for mp3 decoding instead of torchaudio by @polinaeterna in https://github.com/huggingface/datasets/pull/5573
>
> - this allows to not have dependencies on pytorch to decode audio files
> - this was possible with soundfile 0.12 which bundles libsndfile binaries at a recent version with MP3 support | Use soundfile for mp3 decoding instead of torchaudio: As suggested by @severo:
> Use soundfile for mp3 decoding instead of torchaudio by @polinaeterna in https://github.com/huggingface/datasets/pull/5573
>
> - this allows to not have dependencies on pytorch to decode audio files
> - this was possible with soundfile 0.12 which bundles libsndfile binaries at a recent version with MP3 support | closed | 2023-06-01T08:23:44Z | 2023-06-02T09:09:55Z | 2023-06-02T09:09:55Z | albertvillanova |
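A minimal check of the decoding path discussed above, assuming soundfile>=0.12 is installed and an MP3 file is available locally (the file name is a placeholder):

```python
import soundfile as sf

# soundfile 0.12 bundles libsndfile binaries with MP3 support, so no torch/torchaudio is needed.
array, sampling_rate = sf.read("sample.mp3")
print(array.shape, sampling_rate)  # numpy array of samples and the sampling rate in Hz
```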
1,735,757,059 | Use soundfile for mp3 decoding instead of torchaudio | null | Use soundfile for mp3 decoding instead of torchaudio: | closed | 2023-06-01T08:20:59Z | 2023-06-01T08:45:02Z | 2023-06-01T08:45:02Z | albertvillanova |
1,735,752,157 | Move pytest-asyncio dependency from main to dev | Move `pytest-asyncio` dependency from main to dev category.
This was introduced by:
- #1044 | Move pytest-asyncio dependency from main to dev: Move `pytest-asyncio` dependency from main to dev category.
This was introduced by:
- #1044 | closed | 2023-06-01T08:17:46Z | 2023-06-01T12:08:45Z | 2023-06-01T12:05:42Z | albertvillanova |
1,734,864,554 | Dataset Viewer issue for ms_marco | ### Link
https://huggingface.co/datasets/ms_marco
### Description
The dataset viewer is not working for dataset ms_marco.
Error details:
```
Error code: JobRunnerCrashedError
```
| Dataset Viewer issue for ms_marco: ### Link
https://huggingface.co/datasets/ms_marco
### Description
The dataset viewer is not working for dataset ms_marco.
Error details:
```
Error code: JobRunnerCrashedError
```
| closed | 2023-05-31T19:23:53Z | 2023-06-01T04:46:22Z | 2023-06-01T04:46:22Z | jxmorris12 |
1,734,379,040 | Fix link to first_rows docs | Now it gives a 404 error. | Fix link to first_rows docs: Now it gives a 404 error. | closed | 2023-05-31T14:35:16Z | 2023-06-01T12:05:21Z | 2023-06-01T12:01:48Z | albertvillanova
1,734,307,847 | minor doc fixes | null | minor doc fixes: | closed | 2023-05-31T14:02:34Z | 2023-05-31T14:06:27Z | 2023-05-31T14:03:11Z | lhoestq |
1,734,267,573 | Dataset Viewer issue for declare-lab/TangoPromptBank | ### Link
https://huggingface.co/datasets/declare-lab/TangoPromptBank
### Description
The dataset viewer is not working for dataset declare-lab/TangoPromptBank.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for declare-lab/TangoPromptBank: ### Link
https://huggingface.co/datasets/declare-lab/TangoPromptBank
### Description
The dataset viewer is not working for dataset declare-lab/TangoPromptBank.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-05-31T13:44:14Z | 2023-05-31T14:17:54Z | 2023-05-31T14:17:54Z | soujanyaporia |
1,734,064,654 | Use original parquet files if present, instead of re-generating | When a dataset is already in parquet format, we should try to use those parquet files in `config-parquet-and-dataset-info` instead of creating brand new ones.
It would help foster the use of parquet by default. It would provide an instant dataset viewer.
Notes:
- it should be "free" because the original parquet files are already stored with LFS, so it's just a matter of creating a new pointer to them
- we should take care of the size: maybe re-convert if the parquet files are too big or too small, in order to ensure our services (/rows, stats, search) will work as expected
- in the same vein: we should check if the row group size is coherent with the type of data. In the dataset viewer, we use a smaller row group size for images or audio, to help /rows be reactive enough.
- maybe the same parquet file is used for multiple configs, or splits, through the dataset script. What to do in that case? Maybe we should apply the "shortcut" only when the dataset has no script | Use original parquet files if present, instead of re-generating: When a dataset is already in parquet format, we should try to use those parquet files in `config-parquet-and-dataset-info` instead of creating brand new ones.
It would help foster the use of parquet by default. It would provide an instant dataset viewer.
Notes:
- it should be "free" because the original parquet files are already stored with LFS, so it's just a matter of creating a new pointer to them
- we should take care of the size: maybe re-convert if the parquet files are too big or too small, in order to ensure our services (/rows, stats, search) will work as expected
- in the same vein: we should check if the row group size is coherent with the type of data. In the dataset viewer, we use a smaller row group size for images or audio, to help /rows be reactive enough.
- maybe the same parquet file is used for multiple configs, or splits, through the dataset script. What to do in that case? Maybe we should apply the "shortcut" only when the dataset has no script | closed | 2023-05-31T12:02:07Z | 2023-06-09T17:41:14Z | 2023-06-09T17:41:14Z | severo |
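A hedged sketch of the kind of sanity check suggested above, inspecting an existing parquet file's row groups with pyarrow before deciding whether it can be reused as-is (the path and threshold are illustrative assumptions, not the service's real limits):

```python
import pyarrow.parquet as pq

MAX_ROWS_PER_ROW_GROUP = 1000  # e.g. a smaller value would be wanted for image/audio datasets
path = "train-00000-of-00001.parquet"  # placeholder path to an original parquet file

metadata = pq.ParquetFile(path).metadata
row_group_rows = [metadata.row_group(i).num_rows for i in range(metadata.num_row_groups)]
reusable = all(n <= MAX_ROWS_PER_ROW_GROUP for n in row_group_rows)
print(f"{metadata.num_rows} rows in {metadata.num_row_groups} row groups; reusable={reusable}")
```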
1,734,042,603 | Store the list of parquet shards on the Hub? | We could reorganize the way we store the parquet files on the Hub:
- one directory per config (already done)
- then, one directory per split
- then, one file per shard, with name `n.parquet` (starting at `0.parquet`)
- plus: a file `shards.json` which is a JSON file with an "ordered" array of the shard URLs, ie: `["https://huggingface.co/datasets/LLMs/Alpaca-ShareGPT/parquet/train/0.parquet", "https://huggingface.co/datasets/LLMs/Alpaca-ShareGPT/parquet/train/1.parquet", ..., "https://huggingface.co/datasets/LLMs/Alpaca-ShareGPT/parquet/train/13.parquet"]`
Alternative: don't change the directory structure or the filenames, but use redirections. Anyway, add the `shards.json` files (one per split)
| Store the list of parquet shards on the Hub?: We could reorganize the way we store the parquet files on the Hub:
- one directory per config (already done)
- then, one directory per split
- then, one file per shard, with name `n.parquet` (starting at `0.parquet`)
- plus: a file `shards.json` which is a JSON file with an "ordered" array of the shard URLs, ie: `["https://huggingface.co/datasets/LLMs/Alpaca-ShareGPT/parquet/train/0.parquet", "https://huggingface.co/datasets/LLMs/Alpaca-ShareGPT/parquet/train/1.parquet", ..., "https://huggingface.co/datasets/LLMs/Alpaca-ShareGPT/parquet/train/13.parquet"]`
Alternative: don't change the directory structure or the filenames, but use redirections. Anyway, add the `shards.json` files (one per split)
| closed | 2023-05-31T11:52:55Z | 2023-06-30T16:33:31Z | 2023-06-30T16:33:30Z | severo |
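A small sketch of building the proposed `shards.json` manifest for one split; the URL pattern follows the example in the proposal above and is not an existing convention:

```python
import json

base_url = "https://huggingface.co/datasets/LLMs/Alpaca-ShareGPT/parquet/train"
num_shards = 14  # 0.parquet ... 13.parquet, as in the example above

shard_urls = [f"{base_url}/{n}.parquet" for n in range(num_shards)]
with open("shards.json", "w") as f:
    json.dump(shard_urls, f, indent=2)
```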
1,734,007,521 | Change the name of the committer on the Hub | Currently, the parquet files are committed by https://huggingface.co/francky (see https://huggingface.co/datasets/glue/commits/refs%2Fconvert%2Fparquet).
Create a dedicated user like `parquet-converter` or `converter` or `datasets-server`, or...
Alternative: use an app token to do the commits. I understand that we should soon have the feature, right @SBrandeis? | Change the name of the committer on the Hub: Currently, the parquet files are committed by https://huggingface.co/francky (see https://huggingface.co/datasets/glue/commits/refs%2Fconvert%2Fparquet).
Create a dedicated user like `parquet-converter` or `converter` or `datasets-server`, or...
Alternative: use an app token to do the commits. I understand that we should soon have the feature, right @SBrandeis? | closed | 2023-05-31T11:35:42Z | 2023-06-14T12:09:18Z | 2023-06-14T12:09:18Z | severo |
1,733,995,212 | Send a notification to the dataset owner on parquet files creation | Once the parquet files have been created, we could send a notification to the dataset maintainer to let them know, with the goal of increasing the usage of these files.
Note: an idea by @julien-c when we started working on parquet conversion was to do it through the PR mechanism. It would have helped in that aspect, since the dataset maintainers would have received the notification as with any other PR. | Send a notification to the dataset owner on parquet files creation: Once the parquet files have been created, we could send a notification to the dataset maintainer to let them know, with the goal of increasing the usage of these files.
Note: an idea by @julien-c when we started working on parquet conversion was to do it through the PR mechanism. It would have helped in that aspect, since the dataset maintainers would have received the notification as with any other PR. | closed | 2023-05-31T11:27:47Z | 2023-09-25T08:27:09Z | 2023-09-25T08:27:09Z | severo |
1,732,827,650 | [doc] Query datasets from Datasets Server: split into one page per library? | i.e. having a specific page for DuckDB would be very cool, I think | [doc] Query datasets from Datasets Server: split into one page per library?: i.e. having a specific page for DuckDB would be very cool, I think | closed | 2023-05-30T19:10:54Z | 2023-06-30T16:20:16Z | 2023-06-30T16:20:15Z | julien-c
1,732,824,995 | doc typo | null | doc typo: | closed | 2023-05-30T19:09:06Z | 2023-05-31T12:30:43Z | 2023-05-31T12:26:54Z | julien-c |
1,732,634,748 | update openapi | added /rows and fixed minor things | update openapi: added /rows and fixed minor things | closed | 2023-05-30T16:54:23Z | 2023-05-31T13:35:23Z | 2023-05-31T13:32:23Z | lhoestq |