id | title | body | description | state | created_at | updated_at | closed_at | user |
---|---|---|---|---|---|---|---|---|
1,732,634,565 | Add /rows docs | null | Add /rows docs: | closed | 2023-05-30T16:54:13Z | 2023-05-31T13:48:19Z | 2023-05-31T13:32:09Z | lhoestq |
1,732,199,452 | Dataset Viewer issue for dineshpatil341341/demo | ### Link
https://huggingface.co/datasets/dineshpatil341341/demo
### Description
The dataset viewer is not working for dataset dineshpatil341341/demo.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for dineshpatil341341/demo: ### Link
https://huggingface.co/datasets/dineshpatil341341/demo
### Description
The dataset viewer is not working for dataset dineshpatil341341/demo.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-05-30T12:51:33Z | 2023-05-31T05:35:12Z | 2023-05-31T05:35:12Z | kishorgujjar |
1,731,750,704 | Fix missing slash in admin endpoints | As discussed with @severo, this PR fixes the missing slash in some admin endpoints, e.g.:
- https://datasets-server.huggingface.co/admin/force-refreshdataset-config-names
After this PR, it will be:
- https://datasets-server.huggingface.co/admin/force-refresh/dataset-config-names
Related to:
- #1246 | Fix missing slash in admin endpoints: As discussed with @severo, this PR fixes the missing slash in some admin endpoints, e.g.:
- https://datasets-server.huggingface.co/admin/force-refreshdataset-config-names
After this PR, it will be:
- https://datasets-server.huggingface.co/admin/force-refresh/dataset-config-names
Related to:
- #1246 | closed | 2023-05-30T08:13:28Z | 2023-05-31T12:54:23Z | 2023-05-30T12:49:43Z | albertvillanova |
1,730,971,046 | Part #2 - Adding "partition" field on queue and cache db | Part of https://github.com/huggingface/datasets-server/issues/1087, adding partition field in queue and cache collections. | Part #2 - Adding "partition" field on queue and cache db: Part of https://github.com/huggingface/datasets-server/issues/1087, adding partition field in queue and cache collections. | closed | 2023-05-29T15:51:45Z | 2023-10-10T13:29:31Z | 2023-06-01T14:37:23Z | AndreaFrancis |
1,729,518,701 | Dataset Viewer issue for Muennighoff/xP3x | ### Link
https://huggingface.co/datasets/Muennighoff/xP3x
### Description
The dataset viewer is not working for dataset Muennighoff/xP3x.
Error details:
```
The dataset is currently empty. Upload or create new data files. Then, you will be able to explore them in the Dataset Viewer.
```
This dataset can be loaded and works just fine; however, the Hub displays: The dataset is currently empty. [Upload or create new data files](https://huggingface.co/datasets/Muennighoff/xP3x/tree/main). Then, you will be able to explore them in the Dataset Viewer.
Not sure if this is a datasets or hub bug
| Dataset Viewer issue for Muennighoff/xP3x: ### Link
https://huggingface.co/datasets/Muennighoff/xP3x
### Description
The dataset viewer is not working for dataset Muennighoff/xP3x.
Error details:
```
The dataset is currently empty. Upload or create new data files. Then, you will be able to explore them in the Dataset Viewer.
```
This dataset can be loaded and works just fine; however, the Hub displays: The dataset is currently empty. [Upload or create new data files](https://huggingface.co/datasets/Muennighoff/xP3x/tree/main). Then, you will be able to explore them in the Dataset Viewer.
Not sure if this is a datasets or hub bug
| closed | 2023-05-28T09:53:38Z | 2023-05-30T06:58:28Z | 2023-05-30T06:58:28Z | Muennighoff |
1,728,607,825 | Dataset Viewer issue for minioh1234/martin_valen_dataset | ### Link
https://huggingface.co/datasets/minioh1234/martin_valen_dataset
### Description
The dataset viewer is not working for dataset minioh1234/martin_valen_dataset.
Error details:
```
Error code: ResponseNotReady
```
The dataset preview is not available for this dataset.
The server is busier than usual and the response is not ready yet. Please retry later.
Error code: ResponseNotReady | Dataset Viewer issue for minioh1234/martin_valen_dataset: ### Link
https://huggingface.co/datasets/minioh1234/martin_valen_dataset
### Description
The dataset viewer is not working for dataset minioh1234/martin_valen_dataset.
Error details:
```
Error code: ResponseNotReady
```
The dataset preview is not available for this dataset.
The server is busier than usual and the response is not ready yet. Please retry later.
Error code: ResponseNotReady | closed | 2023-05-27T09:44:16Z | 2023-06-26T15:38:53Z | 2023-06-26T15:38:53Z | ohchangmin |
1,727,982,372 | feat: 🎸 create orchestrator | In this PR:
1. we centralize all the operations on the graph, within DatasetOrchestrator:
- `.set_revision`: used by the webhook, sets the current git revision for a dataset, which will refresh the "root" steps that have a different revision (and, by cascade, the whole graph)
- `.finish_job`: used by the workers (and by the zombies/long job killers) to finish a job, put the data in the cache, and create the new jobs if needed (it also deletes duplicate jobs if any)
- `.has_some_cache`: used in the API endpoints, when a cache entry is not found. It's an implementation detail:
- if the dataset has no cache entry, we do a call to the Hub to get the revision -> if we get an answer, we create the jobs with the revision, if not we return 404 (the dataset does not exist, or is not supported).
- else: if the cache already has entries, we will check if the asked cache entry could exist at one point (see `.has_pending_ancestor_jobs` below)
- `.has_pending_ancestor_jobs`: used in the API endpoints, when a cache entry is not found and the dataset already has some other cache entries. It checks if the requested steps, or any of their ancestors, have some pending jobs. If so, we ask the user to retry later; otherwise, 404 because the cache entry will never exist (possibly the config or the split parameter is wrong)
- `.backfill`: a full analysis of the cache and queue for a dataset, to determine if the state is normal, and if not, which jobs should be created or deleted.
2. use the appropriate orchestrator methods (which try to be as fast as possible) instead of the previous behavior, which was to always call `.backfill`, and was too heavy with unnecessary operations, in particular for datasets with a lot of configs or splits.
Also note that the jobs should now be finished with the correct status: SUCCESS or ERROR. Until now (since we used backfill), they were all put in CANCELLED status, which didn't help with monitoring. | feat: 🎸 create orchestrator: In this PR:
1. we centralize all the operations on the graph, within DatasetOrchestrator:
- `.set_revision`: used by the webhook, sets the current git revision for a dataset, which will refresh the "root" steps that have a different revision (and, by cascade, the whole graph)
- `.finish_job`: used by the workers (and by the zombies/long job killers) to finish a job, put the data in the cache, and create the new jobs if needed (it also deletes duplicate jobs if any)
- `.has_some_cache`: used in the API endpoints, when a cache entry is not found. It's an implementation detail:
- if the dataset has no cache entry, we do a call to the Hub to get the revision -> if we get an answer, we create the jobs with the revision, if not we return 404 (the dataset does not exist, or is not supported).
- else: if the cache already has entries, we will check if the asked cache entry could exist at one point (see `.has_pending_ancestor_jobs` below)
- `.has_pending_ancestor_jobs`: used in the API endpoints, when a cache entry is not found and the dataset already has some other cache entries. It checks if the requested steps, or any of their ancestors, have some pending jobs. If so, we ask the user to retry later; otherwise, 404 because the cache entry will never exist (possibly the config or the split parameter is wrong)
- `.backfill`: a full analysis of the cache and queue for a dataset, to determine if the state is normal, and if not, which jobs should be created or deleted.
2. use the appropriate orchestrator methods (which try to be as fast as possible) instead of the previous behavior, which was to always call `.backfill`, and was too heavy with unnecessary operations, in particular for datasets with a lot of configs or splits.
Also note that the jobs should now be finished with the correct status: SUCCESS or ERROR. Until now (since we used backfill), they were all put in CANCELLED status, which didn't help with monitoring. | closed | 2023-05-26T17:19:52Z | 2023-06-01T12:47:42Z | 2023-06-01T12:44:38Z | severo |
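The orchestrator described in the PR above can be summarized as an interface. Below is a minimal sketch of that interface; the method names come from the PR text, while the signatures, parameter names, and docstring details are assumptions for illustration, not the actual implementation.
```python
# Minimal sketch of the DatasetOrchestrator interface described in the PR
# above. Method names follow the PR text; signatures and types are assumed.
from dataclasses import dataclass


@dataclass
class DatasetOrchestrator:
    dataset: str

    def set_revision(self, revision: str) -> None:
        """Webhook entry point: record the current git revision; root steps
        whose cached revision differs are refreshed, cascading down the graph."""
        ...

    def finish_job(self, job_id: str, is_success: bool) -> None:
        """Worker entry point: store the result in the cache, delete duplicate
        jobs if any, and create the children jobs if needed."""
        ...

    def has_some_cache(self) -> bool:
        """API helper: True if the dataset already has at least one cache entry.
        If not, the API asks the Hub for the revision (404 if unsupported)."""
        ...

    def has_pending_ancestor_jobs(self, processing_step_names: list[str]) -> bool:
        """API helper: True if the requested steps, or any of their ancestors,
        still have pending jobs (the client should retry later)."""
        ...

    def backfill(self, revision: str) -> int:
        """Full analysis of cache and queue for the dataset; creates or deletes
        jobs to bring it back to a normal state and returns how many changed."""
        ...
```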
1,727,570,771 | feat: Part #1 - New processing step to calculate/get split partitions | Part of https://github.com/huggingface/datasets-server/issues/1087
Given a chunk size and split row number, this job runner will generate partitions for a split.
Part of this code was already introduced in https://github.com/huggingface/datasets-server/pull/1213, but maybe it is better to split the PR into one for the "partitions" calculation and another to add the new granularity logic.
Sample cache output with chunk_size=50:
```
{
"num_rows": 150,
"partitions": [
{
"dataset": "dataset_ok",
"config": "config_ok",
"split": "split_ok",
"partition": "0-99",
},
{
"dataset": "dataset_ok",
"config": "config_ok",
"split": "split_ok",
"partition": "100-150",
},
]
}
```
| feat: Part #1 - New processing step to calculate/get split partitions: Part of https://github.com/huggingface/datasets-server/issues/1087
Given a chunk size and split row number, this job runner will generate partitions for a split.
Part of this code was already introduced in https://github.com/huggingface/datasets-server/pull/1213, but maybe it is better to split the PR into one for the "partitions" calculation and another to add the new granularity logic.
Sample cache output with chunk_size=50:
```
{
"num_rows": 150,
"partitions": [
{
"dataset": "dataset_ok",
"config": "config_ok",
"split": "split_ok",
"partition": "0-99",
},
{
"dataset": "dataset_ok",
"config": "config_ok",
"split": "split_ok",
"partition": "100-150",
},
]
}
```
| closed | 2023-05-26T12:51:53Z | 2024-01-26T11:56:03Z | 2023-06-01T14:37:20Z | AndreaFrancis |
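For reference, a partition list like the one in the sample above can be computed with a few lines of Python. This is only a sketch: the inclusive "start-end" string format mirrors the sample cache output, but the exact boundary convention used by the job runner is an assumption.
```python
# Sketch: derive partition range strings from a row count and a chunk size.
# The inclusive "start-end" format mirrors the sample cache output above;
# the exact boundary convention is an assumption, not the PR's implementation.
def compute_partitions(num_rows: int, chunk_size: int) -> list[str]:
    partitions = []
    for start in range(0, num_rows, chunk_size):
        end = min(start + chunk_size, num_rows) - 1  # inclusive upper bound
        partitions.append(f"{start}-{end}")
    return partitions


print(compute_partitions(num_rows=150, chunk_size=100))  # ['0-99', '100-149']
```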
1,727,345,980 | Update doc index | null | Update doc index: | closed | 2023-05-26T10:27:55Z | 2023-05-26T18:04:37Z | 2023-05-26T18:01:20Z | lhoestq |
1,726,513,702 | Generate 5GB parquet files for big datasets | For datasets over 5GB, let's generate 5GB parquet files (with shards) instead of ignoring them. The fact that the dataset was truncated should be stored somewhere.
---
Currently, datasets-server generates and stores parquet files only if the dataset size is less than the `PARQUET_AND_INFO_MAX_DATASET_SIZE` config.
- `PARQUET_AND_INFO_MAX_DATASET_SIZE`: the maximum size in bytes of the dataset to pre-compute the parquet files. Bigger datasets, or datasets without that information, are ignored. Defaults to `100_000_000`.
For future implementations/features we would probably need to read the full dataset; big datasets won't be available for that.
There are a couple of suggestions to mitigate excluding big datasets from full reading:
- Copy at least first x GB of data
- If a dataset is already in parquet files copy them directly instead of processing
- To have a data lake?
| Generate 5GB parquet files for big datasets: For datasets over 5GB, let's generate 5GB parquet files (with shards) instead of ignoring them. The fact that the dataset was truncated should be stored somewhere.
---
Currently, datasets-server generates and stores parquet files only if the dataset size is less than the `PARQUET_AND_INFO_MAX_DATASET_SIZE` config.
- `PARQUET_AND_INFO_MAX_DATASET_SIZE`: the maximum size in bytes of the dataset to pre-compute the parquet files. Bigger datasets, or datasets without that information, are ignored. Defaults to `100_000_000`.
For future implementations/features we would probably need to read the full dataset; big datasets won't be available for that.
There are a couple of suggestions to mitigate excluding big datasets from full reading:
- Copy at least first x GB of data
- If a dataset is already in parquet files copy them directly instead of processing
- To have a data lake?
| closed | 2023-05-25T21:29:19Z | 2023-07-03T15:40:33Z | 2023-07-03T15:40:33Z | AndreaFrancis |
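A rough sketch of the sharding idea follows, using pyarrow. The 5 GB cap, the greedy size check based on uncompressed batch sizes, and the file naming are all assumptions for illustration; a real implementation would account for parquet compression and would also record that the dataset was truncated.
```python
# Sketch: write a stream of Arrow record batches as parquet shards capped at
# a maximum size. The cap, the size estimate (uncompressed bytes) and the
# file naming are assumptions for illustration only.
import pyarrow as pa
import pyarrow.parquet as pq

MAX_SHARD_BYTES = 5 * 1024**3  # ~5GB per shard


def write_shards(batches, schema: pa.Schema, prefix: str) -> list[str]:
    paths: list[str] = []
    writer = None
    written = MAX_SHARD_BYTES  # force a new shard on the first batch
    for batch in batches:
        if written >= MAX_SHARD_BYTES:
            if writer is not None:
                writer.close()
            path = f"{prefix}-{len(paths):05d}.parquet"
            writer = pq.ParquetWriter(path, schema)
            paths.append(path)
            written = 0
        writer.write_table(pa.Table.from_batches([batch], schema=schema))
        written += batch.nbytes  # uncompressed size, used as a rough proxy
    if writer is not None:
        writer.close()
    return paths
```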
1,726,417,949 | Separate opt in out urls scan | Part of the second approach for spawning full scan https://github.com/huggingface/datasets-server/issues/1087
"Run full scan in separated Jobs and store results in separated cache entries".
I am moving the logic that inspects whether a split has image URL columns to a new step, `"split-image-url-columns"`.
Now, `"split-opt-in-out-urls-scan"` will depend on `"split-image-url-columns"`.
I am also adding validation on the new step to consider only those image URLs as discussed in https://huggingface.slack.com/archives/C0311GZ7R6K/p1684962431285069.
| Separate opt in out urls scan: Part of the second approach for spawning full scan https://github.com/huggingface/datasets-server/issues/1087
"Run full scan in separated Jobs and store results in separated cache entries".
I am moving the logic that inspects whether a split has image URL columns to a new step, `"split-image-url-columns"`.
Now, `"split-opt-in-out-urls-scan"` will depend on `"split-image-url-columns"`.
I am also adding validation on the new step to consider only those image URLs as discussed in https://huggingface.slack.com/archives/C0311GZ7R6K/p1684962431285069.
| closed | 2023-05-25T20:02:57Z | 2023-05-26T12:32:03Z | 2023-05-26T12:28:44Z | AndreaFrancis |
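As a rough illustration of what the new `"split-image-url-columns"` step could check, here is a sketch that keeps only the string columns whose values mostly look like image URLs. The extension list, the 90% threshold, and the row-sampling approach are assumptions, not the actual implementation.
```python
# Sketch: detect columns whose string values mostly look like image URLs.
# Extension list and threshold are assumptions for illustration only.
IMAGE_EXTENSIONS = (".jpg", ".jpeg", ".png", ".gif", ".webp")


def looks_like_image_url(value: str) -> bool:
    return value.startswith(("http://", "https://")) and value.lower().endswith(IMAGE_EXTENSIONS)


def image_url_columns(rows: list[dict], threshold: float = 0.9) -> list[str]:
    if not rows:
        return []
    selected = []
    for column in rows[0]:
        strings = [row.get(column) for row in rows if isinstance(row.get(column), str)]
        if strings and sum(looks_like_image_url(v) for v in strings) / len(strings) >= threshold:
            selected.append(column)
    return selected
```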
1,726,061,255 | Fix audio data in pagination of audio datasets | Currently pagination is only enabled for testing purposes on [arabic_speech_corpus](https://huggingface.co/datasets/arabic_speech_corpus) but times out because the "transform to list" step that writes the audio files to disk takes too much time.
Currently it writes both MP3 and WAV - but we should find which one is faster and only write that one.
This is not super high prio for now though, since there aren't a lot of audio datasets with pagination | Fix audio data in pagination of audio datasets: Currently pagination is only enabled for testing purposes on [arabic_speech_corpus](https://huggingface.co/datasets/arabic_speech_corpus) but times out because the "transform to list" step that writes the audio files to disk takes too much time.
Currently it writes both MP3 and WAV - but we should find which one is faster and only write that one.
This is not super high prio for now though, since there aren't a lot of audio datasets with pagination | closed | 2023-05-25T15:36:49Z | 2023-09-15T07:59:54Z | 2023-09-15T07:59:53Z | lhoestq |
1,726,036,223 | Opt in/out scan only image urls | Context: https://huggingface.slack.com/archives/C0311GZ7R6K/p1684962431285069
Before, datasets-server scanned all URL columns for spawning opt-in/out; now it will filter to image URLs only. | Opt in/out scan only image urls: Context: https://huggingface.slack.com/archives/C0311GZ7R6K/p1684962431285069
Before, datasets-server scanned all URL columns for spawning opt-in/out; now it will filter to image URLs only. | closed | 2023-05-25T15:22:11Z | 2023-10-10T13:29:48Z | 2023-05-25T20:02:46Z | AndreaFrancis |
1,725,729,653 | Revert "fix: 🐛 finish the job before backfilling, to get the status (… | …#1252)"
This reverts commit 1cbd9ede2ea7de7f93662c0e802cb77d378eac3c.
The backfill() step still lasts too long in the workers, for datasets with a lot of configs/splits, leading to concurrency issues. As we don't have prometheus metrics for the workers, we cannot benchmark on prod data.
Reverting, and I will find another solution to finish the jobs with the right status. | Revert "fix: 🐛 finish the job before backfilling, to get the status (…: …#1252)"
This reverts commit 1cbd9ede2ea7de7f93662c0e802cb77d378eac3c.
The backfill() step still lasts too long in the workers, for datasets with a lot of configs/splits, leading to concurrency issues. As we don't have prometheus metrics for the workers, we cannot benchmark on prod data.
Reverting, and I will find another solution to finish the jobs with the right status. | closed | 2023-05-25T12:28:51Z | 2023-05-25T12:32:43Z | 2023-05-25T12:29:03Z | severo |
1,725,651,095 | fix: 🐛 finish the job before backfilling, to get the status | instead of finishing all the jobs with CANCELLED through backfill(), first finish the job with SUCCESS or ERROR, then backfill. | fix: 🐛 finish the job before backfilling, to get the status: instead of finishing all the jobs with CANCELLED through backfill(), first finish the job with SUCCESS or ERROR, then backfill. | closed | 2023-05-25T11:41:32Z | 2023-05-25T11:57:23Z | 2023-05-25T11:53:46Z | severo |
1,725,621,370 | Simplify queue (jobs are now only WAITING or STARTED) | instead of changing its status to cancelled | Simplify queue (jobs are now only WAITING or STARTED): instead of changing its status to cancelled | closed | 2023-05-25T11:20:25Z | 2023-05-25T11:30:33Z | 2023-05-25T11:27:38Z | severo |
1,725,367,672 | fix: 🐛 delete pending jobs for other revisions | when backfilling a new revision, all pending jobs for other revisions (be they started or waiting) are canceled. | fix: 🐛 delete pending jobs for other revisions: when backfilling a new revision, all pending jobs for other revisions (be they started or waiting) are canceled. | closed | 2023-05-25T08:46:09Z | 2023-05-25T09:19:39Z | 2023-05-25T09:16:42Z | severo |
1,724,801,888 | feat: 🎸 increase number of parallel jobs for the same namespace | null | feat: 🎸 increase number of parallel jobs for the same namespace: | closed | 2023-05-24T22:08:41Z | 2023-05-24T22:13:16Z | 2023-05-24T22:09:41Z | severo |
1,724,798,036 | Dataset Viewer issue for TempoFunk/big | ### Link
https://huggingface.co/datasets/TempoFunk/big
### Description
The dataset viewer is not working for dataset TempoFunk/big.
Error details:
```
Error code: JobManagerCrashedError
```
| Dataset Viewer issue for TempoFunk/big: ### Link
https://huggingface.co/datasets/TempoFunk/big
### Description
The dataset viewer is not working for dataset TempoFunk/big.
Error details:
```
Error code: JobManagerCrashedError
```
| closed | 2023-05-24T22:04:25Z | 2023-05-25T05:31:37Z | 2023-05-25T05:31:36Z | chavinlo |
1,724,745,733 | feat: 🎸 create all jobs in backfill in one operation | instead of one (or multiple) operations for every job creation | feat: 🎸 create all jobs in backfill in one operation: instead of one (or multiple) operations for every job creation | closed | 2023-05-24T21:12:04Z | 2023-05-24T21:50:15Z | 2023-05-24T21:47:17Z | severo |
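The "one operation" idea boils down to a single bulk write instead of one insert per job. Below is a sketch using pymongo directly, shown only for illustration; the collection name and document layout are assumptions.
```python
# Sketch: create all backfill jobs in a single round trip with insert_many,
# instead of one insert per job. Collection name and document shape are
# assumptions for illustration only.
from pymongo import MongoClient

queue = MongoClient()["queue"]["jobs"]  # hypothetical collection


def create_jobs(job_infos: list[dict]) -> int:
    if not job_infos:
        return 0
    result = queue.insert_many(job_infos)  # one operation for all jobs
    return len(result.inserted_ids)
```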
1,724,437,820 | Rename `/config-names` processing step | the last change of graph step names in https://github.com/huggingface/datasets-server/issues/1086
next: endpoints and docs :)
**reminder: don't forget to stop the workers before migrations!** | Rename `/config-names` processing step: the last change of graph step names in https://github.com/huggingface/datasets-server/issues/1086
next: endpoints and docs :)
**reminder: don't forget to stop the workers before migrations!** | closed | 2023-05-24T17:15:54Z | 2023-05-26T09:22:49Z | 2023-05-26T09:19:42Z | polinaeterna |
1,724,091,985 | Reduce requests to mongo (deleteMany) | Do only one request to mongo to delete multiple jobs, instead of one per job deletion. | Reduce requests to mongo (deleteMany): Do only one request to mongo to delete multiple jobs, instead of one per job deletion. | closed | 2023-05-24T14:07:13Z | 2023-05-24T15:23:37Z | 2023-05-24T15:19:56Z | severo |
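The same idea applies to deletions: one `deleteMany` instead of a loop of single deletes. A sketch, assuming a pymongo collection is passed in and a hypothetical filter shape:
```python
# Sketch: delete all matching jobs with a single deleteMany request.
# The filter fields ("dataset", "status") are assumptions for illustration.
def delete_waiting_jobs(queue, dataset: str) -> int:
    result = queue.delete_many({"dataset": dataset, "status": "waiting"})
    return result.deleted_count
```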
1,723,452,447 | provide a decorator for StepProfiler (prometheus) | Currently, adding `StepProfiler` in the code, to get metrics about the duration of part of the code, implies changing the indentation, which makes it complicated to follow the commits.
It would be simpler to use a `@step_profiler()` decorator on functions.
If we do so, note that we might have to upgrade to Python 3.10 to not lose types: https://peps.python.org/pep-0612/. | provide a decorator for StepProfiler (prometheus): Currently, adding `StepProfiler` in the code, to get metrics about the duration of part of the code, implies changing the indentation, which makes it complicated to follow the commits.
It would be simpler to use a `@step_profiler()` decorator on functions.
If we do so, note that we might have to upgrade to Python 3.10 to not lose types: https://peps.python.org/pep-0612/. | open | 2023-05-24T08:29:00Z | 2024-06-19T14:12:34Z | null | severo |
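A sketch of what the proposed decorator could look like, wrapping a context-manager-style profiler without touching the indentation at call sites. The `StepProfiler` stub below stands in for the existing prometheus-based class (its constructor arguments and behavior here are assumptions); `ParamSpec` (PEP 612, mentioned above) keeps the decorated function's signature intact and requires Python 3.10+.
```python
# Sketch of a @step_profiler() decorator wrapping a context-manager profiler.
# The StepProfiler stub stands in for the existing class; its arguments and
# behavior are assumptions for illustration only.
import functools
import time
from typing import Callable, ParamSpec, TypeVar  # ParamSpec needs Python >= 3.10

P = ParamSpec("P")
R = TypeVar("R")


class StepProfiler:
    def __init__(self, method: str, step: str) -> None:
        self.method, self.step = method, step

    def __enter__(self) -> "StepProfiler":
        self.start = time.perf_counter()
        return self

    def __exit__(self, exc_type, exc, tb) -> None:
        duration = time.perf_counter() - self.start
        print(f"{self.method}/{self.step}: {duration:.3f}s")  # the real class observes a histogram


def step_profiler(method: str, step: str) -> Callable[[Callable[P, R]], Callable[P, R]]:
    def decorator(func: Callable[P, R]) -> Callable[P, R]:
        @functools.wraps(func)
        def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
            with StepProfiler(method=method, step=step):
                return func(*args, **kwargs)
        return wrapper
    return decorator


@step_profiler(method="backfill", step="create_jobs")
def create_jobs() -> None:
    ...
```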
1,723,444,261 | Dataset Viewer issue for fabraz/writingPromptAug | ### Link
https://huggingface.co/datasets/fabraz/writingPromptAug
### Description
The dataset viewer is not working for dataset fabraz/writingPromptAug.
Error details:
```
Error code: ConfigNamesError
Exception: ValueError
Message: Split train already present
Traceback: The previous step failed, the error is copied to this step: kind='/config-names' dataset='fabraz/writingPromptAug' config=None split=None---Traceback (most recent call last):
File "/src/workers/datasets_based/src/datasets_based/workers/config_names.py", line 89, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 323, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 1217, in dataset_module_factory
raise e1 from None
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 1196, in dataset_module_factory
return HubDatasetModuleFactoryWithoutScript(
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 834, in get_module
builder_kwargs["info"] = DatasetInfo._from_yaml_dict(dataset_info_dict)
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/info.py", line 400, in _from_yaml_dict
yaml_data["splits"] = SplitDict._from_yaml_list(yaml_data["splits"])
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/splits.py", line 598, in _from_yaml_list
return cls.from_split_dict(yaml_data)
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/splits.py", line 570, in from_split_dict
split_dict.add(split_info)
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/splits.py", line 547, in add
raise ValueError(f"Split {split_info.name} already present")
ValueError: Split train already present
```
| Dataset Viewer issue for fabraz/writingPromptAug: ### Link
https://huggingface.co/datasets/fabraz/writingPromptAug
### Description
The dataset viewer is not working for dataset fabraz/writingPromptAug.
Error details:
```
Error code: ConfigNamesError
Exception: ValueError
Message: Split train already present
Traceback: The previous step failed, the error is copied to this step: kind='/config-names' dataset='fabraz/writingPromptAug' config=None split=None---Traceback (most recent call last):
File "/src/workers/datasets_based/src/datasets_based/workers/config_names.py", line 89, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 323, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 1217, in dataset_module_factory
raise e1 from None
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 1196, in dataset_module_factory
return HubDatasetModuleFactoryWithoutScript(
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 834, in get_module
builder_kwargs["info"] = DatasetInfo._from_yaml_dict(dataset_info_dict)
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/info.py", line 400, in _from_yaml_dict
yaml_data["splits"] = SplitDict._from_yaml_list(yaml_data["splits"])
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/splits.py", line 598, in _from_yaml_list
return cls.from_split_dict(yaml_data)
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/splits.py", line 570, in from_split_dict
split_dict.add(split_info)
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/splits.py", line 547, in add
raise ValueError(f"Split {split_info.name} already present")
ValueError: Split train already present
```
| closed | 2023-05-24T08:23:48Z | 2023-05-25T05:03:45Z | 2023-05-25T05:03:45Z | Patrick-Ni |
1,723,404,679 | fix: π fix order of the migrations | null | fix: π fix order of the migrations: | closed | 2023-05-24T08:00:24Z | 2023-05-24T08:06:02Z | 2023-05-24T08:02:30Z | severo |
1,722,799,399 | Adding temporary hardcoded data for opt in/out: laion/laion2B-en and kakaobrain/coyo-700m | As per the discussion in https://github.com/huggingface/moon-landing/pull/6332#discussion_r1202989143
Fake data will be hardcoded on the server side so that it is consistent with what the API returns and what we show in the UI.
NOTE: This is a temporary solution; once https://github.com/huggingface/datasets-server/issues/1087 is implemented, this code can be removed.
| Adding temporary hardcoded data for opt in/out: laion/laion2B-en and kakaobrain/coyo-700m: As per the discussion in https://github.com/huggingface/moon-landing/pull/6332#discussion_r1202989143
Fake data will be hardcoded on the server side so that it is consistent with what the API returns and what we show in the UI.
NOTE: This is a temporary solution; once https://github.com/huggingface/datasets-server/issues/1087 is implemented, this code can be removed.
| closed | 2023-05-23T21:17:22Z | 2023-05-24T13:13:00Z | 2023-05-24T13:09:55Z | AndreaFrancis |
1,722,558,958 | feat: 🎸 add an index | recommended by mongo atlas
<img width="1060" alt="Capture d'écran 2023-05-23 à 20 09 57" src="https://github.com/huggingface/datasets-server/assets/1676121/e6817023-40c0-443e-a590-90115b9eee6c">
| feat: 🎸 add an index: recommended by mongo atlas
<img width="1060" alt="Capture d'écran 2023-05-23 à 20 09 57" src="https://github.com/huggingface/datasets-server/assets/1676121/e6817023-40c0-443e-a590-90115b9eee6c">
| closed | 2023-05-23T18:11:31Z | 2023-05-23T18:26:42Z | 2023-05-23T18:23:59Z | severo |
1,722,312,510 | Add numba cache to api | this should fix issues when importing librosa in the API (it causes issues in /rows for audio datasets) | Add numba cache to api: this should fix issues when importing librosa in the API (it causes issues in /rows for audio datasets) | closed | 2023-05-23T15:28:37Z | 2023-05-24T10:47:02Z | 2023-05-24T10:44:10Z | lhoestq |
1,722,303,198 | feat: 🎸 reduce the duration of the TTL index on finished_at | from 1 day to 10 minutes. Hopefully it will help reduce the time of the requests.
Note also that we refactored the migration script a bit to factorize code | feat: 🎸 reduce the duration of the TTL index on finished_at: from 1 day to 10 minutes. Hopefully it will help reduce the time of the requests.
Note also that we refactored the migration script a bit to factorize code | closed | 2023-05-23T15:23:17Z | 2023-05-23T18:08:42Z | 2023-05-23T15:28:14Z | severo |
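For reference, a TTL index like the one discussed here can be declared with pymongo as sketched below; the collection name is an assumption. Note that MongoDB will not re-create an existing index with a different `expireAfterSeconds`: it has to be modified (collMod) or dropped and rebuilt, which is presumably why the migration script is involved.
```python
# Sketch: a TTL index on finished_at so finished jobs are evicted by MongoDB.
# Collection name is an assumption; the PR reduces the expiry from 1 day to
# 10 minutes.
from pymongo import MongoClient

jobs = MongoClient()["queue"]["jobs"]  # hypothetical collection
jobs.create_index("finished_at", expireAfterSeconds=10 * 60)  # was 24 * 3600
```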
1,722,284,671 | Use parquet metadata for all datasets | I still keep "Audio" unsupported since there are some errors with librosa on API workers | Use parquet metadata for all datasets: I still keep "Audio" unsupported since there are some errors with librosa on API workers | closed | 2023-05-23T15:12:09Z | 2023-05-23T15:39:31Z | 2023-05-23T15:36:26Z | lhoestq |
1,722,077,731 | Use parquet metadata for more datasets | including text, audio and full hd image datasets | Use parquet metadata for more datasets: including text, audio and full hd image datasets | closed | 2023-05-23T13:22:11Z | 2023-05-23T13:50:19Z | 2023-05-23T13:46:49Z | lhoestq |
1,722,065,712 | Instrument backfill | - add StepProfiler to libcommon.state, to be able to profile the code duration when doing a backfill
- refactor code to manage prometheus from libcommon
- detail: don't put empty (0) values in cache and queue metrics if the metrics database is empty. it's ok not to have values until the background metrics job has run | Instrument backfill: - add StepProfiler to libcommon.state, to be able to profile the code duration when doing a backfill
- refactor code to manage prometheus from libcommon
- detail: don't put empty (0) values in cache and queue metrics if the metrics database is empty. it's ok not to have values until the background metrics job has run | closed | 2023-05-23T13:15:42Z | 2023-05-23T14:16:57Z | 2023-05-23T14:13:39Z | severo |
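A sketch of a StepProfiler built on prometheus_client is shown below: a context manager that observes a histogram labelled by method and step. The metric name, label names, and constructor arguments are assumptions for illustration, not the library's actual code.
```python
# Sketch: a context-manager StepProfiler observing a prometheus histogram.
# Metric and label names are assumptions for illustration only.
import time

from prometheus_client import Histogram

STEP_DURATION = Histogram(
    "step_processing_time_seconds",
    "Duration of a step of a method",
    labelnames=["method", "step"],
)


class StepProfiler:
    def __init__(self, method: str, step: str) -> None:
        self.method, self.step = method, step

    def __enter__(self) -> "StepProfiler":
        self.start = time.perf_counter()
        return self

    def __exit__(self, exc_type, exc, tb) -> None:
        STEP_DURATION.labels(method=self.method, step=self.step).observe(
            time.perf_counter() - self.start
        )


# usage inside a backfill-like function:
with StepProfiler(method="backfill", step="analyze_state"):
    pass  # ... analyze the cache and queue state ...
```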
1,721,590,012 | feat: 🎸 update dependencies to fix vulnerability | null | feat: 🎸 update dependencies to fix vulnerability: | closed | 2023-05-23T09:08:40Z | 2023-05-23T09:12:32Z | 2023-05-23T09:09:16Z | severo |
1,721,569,438 | Reduce number of concurrent jobs in namespace | the idea is to reduce the number of pending jobs; we currently have > 200,000 jobs, from a lot of different datasets.
And since a likely cause of the queue issues is that concurrent backfill processes run at the same time for the same dataset, we drastically reduce the concurrency to 1 job per namespace (we don't have a way to limit per dataset, for now) | Reduce number of concurrent jobs in namespace: the idea is to reduce the number of pending jobs; we currently have > 200,000 jobs, from a lot of different datasets.
And since a likely cause of the queue issues is that concurrent backfill processes run at the same time for the same dataset, we drastically reduce the concurrency to 1 job per namespace (we don't have a way to limit per dataset, for now) | closed | 2023-05-23T08:58:21Z | 2023-05-23T09:20:25Z | 2023-05-23T09:17:35Z | severo |
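The limit itself amounts to a simple guard when picking the next job: count the jobs already started for the namespace and refuse to start another one above the cap. A sketch follows; the field names and the pymongo-style collection access are assumptions for illustration.
```python
# Sketch: enforce "at most N started jobs per namespace" before starting a
# new job. Field names and collection access are assumptions for illustration.
def can_start_job(queue, namespace: str, max_jobs_per_namespace: int = 1) -> bool:
    started = queue.count_documents({"namespace": namespace, "status": "started"})
    return started < max_jobs_per_namespace
```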
1,721,251,296 | chore(deps): bump requests from 2.28.2 to 2.31.0 in /libs/libcommon | Bumps [requests](https://github.com/psf/requests) from 2.28.2 to 2.31.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/psf/requests/releases">requests's releases</a>.</em></p>
<blockquote>
<h2>v2.31.0</h2>
<h2>2.31.0 (2023-05-22)</h2>
<p><strong>Security</strong></p>
<ul>
<li>
<p>Versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential
forwarding of <code>Proxy-Authorization</code> headers to destination servers when
following HTTPS redirects.</p>
<p>When proxies are defined with user info (<a href="https://user:pass@proxy:8080">https://user:pass@proxy:8080</a>), Requests
will construct a <code>Proxy-Authorization</code> header that is attached to the request to
authenticate with the proxy.</p>
<p>In cases where Requests receives a redirect response, it previously reattached
the <code>Proxy-Authorization</code> header incorrectly, resulting in the value being
sent through the tunneled connection to the destination server. Users who rely on
defining their proxy credentials in the URL are <em>strongly</em> encouraged to upgrade
to Requests 2.31.0+ to prevent unintentional leakage and rotate their proxy
credentials once the change has been fully deployed.</p>
<p>Users who do not use a proxy or do not supply their proxy credentials through
the user information portion of their proxy URL are not subject to this
vulnerability.</p>
<p>Full details can be read in our <a href="https://github.com/psf/requests/security/advisories/GHSA-j8r2-6x86-q33q">Github Security Advisory</a>
and <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-32681">CVE-2023-32681</a>.</p>
</li>
</ul>
<h2>v2.30.0</h2>
<h2>2.30.0 (2023-05-03)</h2>
<p><strong>Dependencies</strong></p>
<ul>
<li>
<p>⚠️ Added support for urllib3 2.0. ⚠️</p>
<p>This may contain minor breaking changes so we advise careful testing and
reviewing <a href="https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html">https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html</a>
prior to upgrading.</p>
<p>Users who wish to stay on urllib3 1.x can pin to <code>urllib3<2</code>.</p>
</li>
</ul>
<h2>v2.29.0</h2>
<h2>2.29.0 (2023-04-26)</h2>
<p><strong>Improvements</strong></p>
<ul>
<li>Requests now defers chunked requests to the urllib3 implementation to improve
standardization. (<a href="https://redirect.github.com/psf/requests/issues/6226">#6226</a>)</li>
<li>Requests relaxes header component requirements to support bytes/str subclasses. (<a href="https://redirect.github.com/psf/requests/issues/6356">#6356</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/psf/requests/blob/main/HISTORY.md">requests's changelog</a>.</em></p>
<blockquote>
<h2>2.31.0 (2023-05-22)</h2>
<p><strong>Security</strong></p>
<ul>
<li>
<p>Versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential
forwarding of <code>Proxy-Authorization</code> headers to destination servers when
following HTTPS redirects.</p>
<p>When proxies are defined with user info (<a href="https://user:pass@proxy:8080">https://user:pass@proxy:8080</a>), Requests
will construct a <code>Proxy-Authorization</code> header that is attached to the request to
authenticate with the proxy.</p>
<p>In cases where Requests receives a redirect response, it previously reattached
the <code>Proxy-Authorization</code> header incorrectly, resulting in the value being
sent through the tunneled connection to the destination server. Users who rely on
defining their proxy credentials in the URL are <em>strongly</em> encouraged to upgrade
to Requests 2.31.0+ to prevent unintentional leakage and rotate their proxy
credentials once the change has been fully deployed.</p>
<p>Users who do not use a proxy or do not supply their proxy credentials through
the user information portion of their proxy URL are not subject to this
vulnerability.</p>
<p>Full details can be read in our <a href="https://github.com/psf/requests/security/advisories/GHSA-j8r2-6x86-q33q">Github Security Advisory</a>
and <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-32681">CVE-2023-32681</a>.</p>
</li>
</ul>
<h2>2.30.0 (2023-05-03)</h2>
<p><strong>Dependencies</strong></p>
<ul>
<li>
<p>⚠️ Added support for urllib3 2.0. ⚠️</p>
<p>This may contain minor breaking changes so we advise careful testing and
reviewing <a href="https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html">https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html</a>
prior to upgrading.</p>
<p>Users who wish to stay on urllib3 1.x can pin to <code>urllib3<2</code>.</p>
</li>
</ul>
<h2>2.29.0 (2023-04-26)</h2>
<p><strong>Improvements</strong></p>
<ul>
<li>Requests now defers chunked requests to the urllib3 implementation to improve
standardization. (<a href="https://redirect.github.com/psf/requests/issues/6226">#6226</a>)</li>
<li>Requests relaxes header component requirements to support bytes/str subclasses. (<a href="https://redirect.github.com/psf/requests/issues/6356">#6356</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/psf/requests/commit/147c8511ddbfa5e8f71bbf5c18ede0c4ceb3bba4"><code>147c851</code></a> v2.31.0</li>
<li><a href="https://github.com/psf/requests/commit/74ea7cf7a6a27a4eeb2ae24e162bcc942a6706d5"><code>74ea7cf</code></a> Merge pull request from GHSA-j8r2-6x86-q33q</li>
<li><a href="https://github.com/psf/requests/commit/302225334678490ec66b3614a9dddb8a02c5f4fe"><code>3022253</code></a> test on pypy 3.8 and pypy 3.9 on windows and macos (<a href="https://redirect.github.com/psf/requests/issues/6424">#6424</a>)</li>
<li><a href="https://github.com/psf/requests/commit/b639e66c816514e40604d46f0088fbceec1a5149"><code>b639e66</code></a> test on py3.12 (<a href="https://redirect.github.com/psf/requests/issues/6448">#6448</a>)</li>
<li><a href="https://github.com/psf/requests/commit/d3d504436ef0c2ac7ec8af13738b04dcc8c694be"><code>d3d5044</code></a> Fixed a small typo (<a href="https://redirect.github.com/psf/requests/issues/6452">#6452</a>)</li>
<li><a href="https://github.com/psf/requests/commit/2ad18e0e10e7d7ecd5384c378f25ec8821a10a29"><code>2ad18e0</code></a> v2.30.0</li>
<li><a href="https://github.com/psf/requests/commit/f2629e9e3c7ce3c3c8c025bcd8db551101cbc773"><code>f2629e9</code></a> Remove strict parameter (<a href="https://redirect.github.com/psf/requests/issues/6434">#6434</a>)</li>
<li><a href="https://github.com/psf/requests/commit/87d63de8739263bbe17034fba2285c79780da7e8"><code>87d63de</code></a> v2.29.0</li>
<li><a href="https://github.com/psf/requests/commit/51716c4ef390136b0d4b800ec7665dd5503e64fc"><code>51716c4</code></a> enable the warnings plugin (<a href="https://redirect.github.com/psf/requests/issues/6416">#6416</a>)</li>
<li><a href="https://github.com/psf/requests/commit/a7da1ab3498b10ec3a3582244c94b2845f8a8e71"><code>a7da1ab</code></a> try on ubuntu 22.04 (<a href="https://redirect.github.com/psf/requests/issues/6418">#6418</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/psf/requests/compare/v2.28.2...v2.31.0">compare view</a></li>
</ul>
</details>
<br />
[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/datasets-server/network/alerts).
</details> | chore(deps): bump requests from 2.28.2 to 2.31.0 in /libs/libcommon: Bumps [requests](https://github.com/psf/requests) from 2.28.2 to 2.31.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/psf/requests/releases">requests's releases</a>.</em></p>
<blockquote>
<h2>v2.31.0</h2>
<h2>2.31.0 (2023-05-22)</h2>
<p><strong>Security</strong></p>
<ul>
<li>
<p>Versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential
forwarding of <code>Proxy-Authorization</code> headers to destination servers when
following HTTPS redirects.</p>
<p>When proxies are defined with user info (<a href="https://user:pass@proxy:8080">https://user:pass@proxy:8080</a>), Requests
will construct a <code>Proxy-Authorization</code> header that is attached to the request to
authenticate with the proxy.</p>
<p>In cases where Requests receives a redirect response, it previously reattached
the <code>Proxy-Authorization</code> header incorrectly, resulting in the value being
sent through the tunneled connection to the destination server. Users who rely on
defining their proxy credentials in the URL are <em>strongly</em> encouraged to upgrade
to Requests 2.31.0+ to prevent unintentional leakage and rotate their proxy
credentials once the change has been fully deployed.</p>
<p>Users who do not use a proxy or do not supply their proxy credentials through
the user information portion of their proxy URL are not subject to this
vulnerability.</p>
<p>Full details can be read in our <a href="https://github.com/psf/requests/security/advisories/GHSA-j8r2-6x86-q33q">Github Security Advisory</a>
and <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-32681">CVE-2023-32681</a>.</p>
</li>
</ul>
<h2>v2.30.0</h2>
<h2>2.30.0 (2023-05-03)</h2>
<p><strong>Dependencies</strong></p>
<ul>
<li>
<p>⚠️ Added support for urllib3 2.0. ⚠️</p>
<p>This may contain minor breaking changes so we advise careful testing and
reviewing <a href="https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html">https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html</a>
prior to upgrading.</p>
<p>Users who wish to stay on urllib3 1.x can pin to <code>urllib3<2</code>.</p>
</li>
</ul>
<h2>v2.29.0</h2>
<h2>2.29.0 (2023-04-26)</h2>
<p><strong>Improvements</strong></p>
<ul>
<li>Requests now defers chunked requests to the urllib3 implementation to improve
standardization. (<a href="https://redirect.github.com/psf/requests/issues/6226">#6226</a>)</li>
<li>Requests relaxes header component requirements to support bytes/str subclasses. (<a href="https://redirect.github.com/psf/requests/issues/6356">#6356</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/psf/requests/blob/main/HISTORY.md">requests's changelog</a>.</em></p>
<blockquote>
<h2>2.31.0 (2023-05-22)</h2>
<p><strong>Security</strong></p>
<ul>
<li>
<p>Versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential
forwarding of <code>Proxy-Authorization</code> headers to destination servers when
following HTTPS redirects.</p>
<p>When proxies are defined with user info (<a href="https://user:pass@proxy:8080">https://user:pass@proxy:8080</a>), Requests
will construct a <code>Proxy-Authorization</code> header that is attached to the request to
authenticate with the proxy.</p>
<p>In cases where Requests receives a redirect response, it previously reattached
the <code>Proxy-Authorization</code> header incorrectly, resulting in the value being
sent through the tunneled connection to the destination server. Users who rely on
defining their proxy credentials in the URL are <em>strongly</em> encouraged to upgrade
to Requests 2.31.0+ to prevent unintentional leakage and rotate their proxy
credentials once the change has been fully deployed.</p>
<p>Users who do not use a proxy or do not supply their proxy credentials through
the user information portion of their proxy URL are not subject to this
vulnerability.</p>
<p>Full details can be read in our <a href="https://github.com/psf/requests/security/advisories/GHSA-j8r2-6x86-q33q">Github Security Advisory</a>
and <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-32681">CVE-2023-32681</a>.</p>
</li>
</ul>
<h2>2.30.0 (2023-05-03)</h2>
<p><strong>Dependencies</strong></p>
<ul>
<li>
<p>⚠️ Added support for urllib3 2.0. ⚠️</p>
<p>This may contain minor breaking changes so we advise careful testing and
reviewing <a href="https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html">https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html</a>
prior to upgrading.</p>
<p>Users who wish to stay on urllib3 1.x can pin to <code>urllib3<2</code>.</p>
</li>
</ul>
<h2>2.29.0 (2023-04-26)</h2>
<p><strong>Improvements</strong></p>
<ul>
<li>Requests now defers chunked requests to the urllib3 implementation to improve
standardization. (<a href="https://redirect.github.com/psf/requests/issues/6226">#6226</a>)</li>
<li>Requests relaxes header component requirements to support bytes/str subclasses. (<a href="https://redirect.github.com/psf/requests/issues/6356">#6356</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/psf/requests/commit/147c8511ddbfa5e8f71bbf5c18ede0c4ceb3bba4"><code>147c851</code></a> v2.31.0</li>
<li><a href="https://github.com/psf/requests/commit/74ea7cf7a6a27a4eeb2ae24e162bcc942a6706d5"><code>74ea7cf</code></a> Merge pull request from GHSA-j8r2-6x86-q33q</li>
<li><a href="https://github.com/psf/requests/commit/302225334678490ec66b3614a9dddb8a02c5f4fe"><code>3022253</code></a> test on pypy 3.8 and pypy 3.9 on windows and macos (<a href="https://redirect.github.com/psf/requests/issues/6424">#6424</a>)</li>
<li><a href="https://github.com/psf/requests/commit/b639e66c816514e40604d46f0088fbceec1a5149"><code>b639e66</code></a> test on py3.12 (<a href="https://redirect.github.com/psf/requests/issues/6448">#6448</a>)</li>
<li><a href="https://github.com/psf/requests/commit/d3d504436ef0c2ac7ec8af13738b04dcc8c694be"><code>d3d5044</code></a> Fixed a small typo (<a href="https://redirect.github.com/psf/requests/issues/6452">#6452</a>)</li>
<li><a href="https://github.com/psf/requests/commit/2ad18e0e10e7d7ecd5384c378f25ec8821a10a29"><code>2ad18e0</code></a> v2.30.0</li>
<li><a href="https://github.com/psf/requests/commit/f2629e9e3c7ce3c3c8c025bcd8db551101cbc773"><code>f2629e9</code></a> Remove strict parameter (<a href="https://redirect.github.com/psf/requests/issues/6434">#6434</a>)</li>
<li><a href="https://github.com/psf/requests/commit/87d63de8739263bbe17034fba2285c79780da7e8"><code>87d63de</code></a> v2.29.0</li>
<li><a href="https://github.com/psf/requests/commit/51716c4ef390136b0d4b800ec7665dd5503e64fc"><code>51716c4</code></a> enable the warnings plugin (<a href="https://redirect.github.com/psf/requests/issues/6416">#6416</a>)</li>
<li><a href="https://github.com/psf/requests/commit/a7da1ab3498b10ec3a3582244c94b2845f8a8e71"><code>a7da1ab</code></a> try on ubuntu 22.04 (<a href="https://redirect.github.com/psf/requests/issues/6418">#6418</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/psf/requests/compare/v2.28.2...v2.31.0">compare view</a></li>
</ul>
</details>
<br />
[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/datasets-server/network/alerts).
</details> | closed | 2023-05-23T05:57:17Z | 2023-05-23T09:01:26Z | 2023-05-23T09:01:24Z | dependabot[bot] |
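To make the advisory above concrete, the affected pattern is passing proxy credentials in the proxy URL, as in the sketch below (the URLs are placeholders). With requests older than 2.31.0, the `Proxy-Authorization` header built from those credentials could be forwarded to the destination server after an HTTPS redirect; the bump to 2.31.0 closes that hole.
```python
# Illustration of the configuration affected by CVE-2023-32681: proxy
# credentials embedded in the proxy URL. The URLs below are placeholders.
import requests

proxies = {"https": "https://user:pass@proxy.example.com:8080"}
response = requests.get("https://example.com/", proxies=proxies)

# After this dependency bump, the installed version should be >= 2.31.0,
# which no longer forwards Proxy-Authorization to the destination on redirect.
print(requests.__version__)
```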
1,721,249,333 | chore(deps-dev): bump requests from 2.28.2 to 2.31.0 in /e2e | Bumps [requests](https://github.com/psf/requests) from 2.28.2 to 2.31.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/psf/requests/releases">requests's releases</a>.</em></p>
<blockquote>
<h2>v2.31.0</h2>
<h2>2.31.0 (2023-05-22)</h2>
<p><strong>Security</strong></p>
<ul>
<li>
<p>Versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential
forwarding of <code>Proxy-Authorization</code> headers to destination servers when
following HTTPS redirects.</p>
<p>When proxies are defined with user info (<a href="https://user:pass@proxy:8080">https://user:pass@proxy:8080</a>), Requests
will construct a <code>Proxy-Authorization</code> header that is attached to the request to
authenticate with the proxy.</p>
<p>In cases where Requests receives a redirect response, it previously reattached
the <code>Proxy-Authorization</code> header incorrectly, resulting in the value being
sent through the tunneled connection to the destination server. Users who rely on
defining their proxy credentials in the URL are <em>strongly</em> encouraged to upgrade
to Requests 2.31.0+ to prevent unintentional leakage and rotate their proxy
credentials once the change has been fully deployed.</p>
<p>Users who do not use a proxy or do not supply their proxy credentials through
the user information portion of their proxy URL are not subject to this
vulnerability.</p>
<p>Full details can be read in our <a href="https://github.com/psf/requests/security/advisories/GHSA-j8r2-6x86-q33q">Github Security Advisory</a>
and <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-32681">CVE-2023-32681</a>.</p>
</li>
</ul>
<h2>v2.30.0</h2>
<h2>2.30.0 (2023-05-03)</h2>
<p><strong>Dependencies</strong></p>
<ul>
<li>
<p>⚠️ Added support for urllib3 2.0. ⚠️</p>
<p>This may contain minor breaking changes so we advise careful testing and
reviewing <a href="https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html">https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html</a>
prior to upgrading.</p>
<p>Users who wish to stay on urllib3 1.x can pin to <code>urllib3<2</code>.</p>
</li>
</ul>
<h2>v2.29.0</h2>
<h2>2.29.0 (2023-04-26)</h2>
<p><strong>Improvements</strong></p>
<ul>
<li>Requests now defers chunked requests to the urllib3 implementation to improve
standardization. (<a href="https://redirect.github.com/psf/requests/issues/6226">#6226</a>)</li>
<li>Requests relaxes header component requirements to support bytes/str subclasses. (<a href="https://redirect.github.com/psf/requests/issues/6356">#6356</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/psf/requests/blob/main/HISTORY.md">requests's changelog</a>.</em></p>
<blockquote>
<h2>2.31.0 (2023-05-22)</h2>
<p><strong>Security</strong></p>
<ul>
<li>
<p>Versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential
forwarding of <code>Proxy-Authorization</code> headers to destination servers when
following HTTPS redirects.</p>
<p>When proxies are defined with user info (<a href="https://user:pass@proxy:8080">https://user:pass@proxy:8080</a>), Requests
will construct a <code>Proxy-Authorization</code> header that is attached to the request to
authenticate with the proxy.</p>
<p>In cases where Requests receives a redirect response, it previously reattached
the <code>Proxy-Authorization</code> header incorrectly, resulting in the value being
sent through the tunneled connection to the destination server. Users who rely on
defining their proxy credentials in the URL are <em>strongly</em> encouraged to upgrade
to Requests 2.31.0+ to prevent unintentional leakage and rotate their proxy
credentials once the change has been fully deployed.</p>
<p>Users who do not use a proxy or do not supply their proxy credentials through
the user information portion of their proxy URL are not subject to this
vulnerability.</p>
<p>Full details can be read in our <a href="https://github.com/psf/requests/security/advisories/GHSA-j8r2-6x86-q33q">Github Security Advisory</a>
and <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-32681">CVE-2023-32681</a>.</p>
</li>
</ul>
<h2>2.30.0 (2023-05-03)</h2>
<p><strong>Dependencies</strong></p>
<ul>
<li>
<p>⚠️ Added support for urllib3 2.0. ⚠️</p>
<p>This may contain minor breaking changes so we advise careful testing and
reviewing <a href="https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html">https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html</a>
prior to upgrading.</p>
<p>Users who wish to stay on urllib3 1.x can pin to <code>urllib3<2</code>.</p>
</li>
</ul>
<h2>2.29.0 (2023-04-26)</h2>
<p><strong>Improvements</strong></p>
<ul>
<li>Requests now defers chunked requests to the urllib3 implementation to improve
standardization. (<a href="https://redirect.github.com/psf/requests/issues/6226">#6226</a>)</li>
<li>Requests relaxes header component requirements to support bytes/str subclasses. (<a href="https://redirect.github.com/psf/requests/issues/6356">#6356</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/psf/requests/commit/147c8511ddbfa5e8f71bbf5c18ede0c4ceb3bba4"><code>147c851</code></a> v2.31.0</li>
<li><a href="https://github.com/psf/requests/commit/74ea7cf7a6a27a4eeb2ae24e162bcc942a6706d5"><code>74ea7cf</code></a> Merge pull request from GHSA-j8r2-6x86-q33q</li>
<li><a href="https://github.com/psf/requests/commit/302225334678490ec66b3614a9dddb8a02c5f4fe"><code>3022253</code></a> test on pypy 3.8 and pypy 3.9 on windows and macos (<a href="https://redirect.github.com/psf/requests/issues/6424">#6424</a>)</li>
<li><a href="https://github.com/psf/requests/commit/b639e66c816514e40604d46f0088fbceec1a5149"><code>b639e66</code></a> test on py3.12 (<a href="https://redirect.github.com/psf/requests/issues/6448">#6448</a>)</li>
<li><a href="https://github.com/psf/requests/commit/d3d504436ef0c2ac7ec8af13738b04dcc8c694be"><code>d3d5044</code></a> Fixed a small typo (<a href="https://redirect.github.com/psf/requests/issues/6452">#6452</a>)</li>
<li><a href="https://github.com/psf/requests/commit/2ad18e0e10e7d7ecd5384c378f25ec8821a10a29"><code>2ad18e0</code></a> v2.30.0</li>
<li><a href="https://github.com/psf/requests/commit/f2629e9e3c7ce3c3c8c025bcd8db551101cbc773"><code>f2629e9</code></a> Remove strict parameter (<a href="https://redirect.github.com/psf/requests/issues/6434">#6434</a>)</li>
<li><a href="https://github.com/psf/requests/commit/87d63de8739263bbe17034fba2285c79780da7e8"><code>87d63de</code></a> v2.29.0</li>
<li><a href="https://github.com/psf/requests/commit/51716c4ef390136b0d4b800ec7665dd5503e64fc"><code>51716c4</code></a> enable the warnings plugin (<a href="https://redirect.github.com/psf/requests/issues/6416">#6416</a>)</li>
<li><a href="https://github.com/psf/requests/commit/a7da1ab3498b10ec3a3582244c94b2845f8a8e71"><code>a7da1ab</code></a> try on ubuntu 22.04 (<a href="https://redirect.github.com/psf/requests/issues/6418">#6418</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/psf/requests/compare/v2.28.2...v2.31.0">compare view</a></li>
</ul>
</details>
<br />
[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/datasets-server/network/alerts).
</details> | chore(deps-dev): bump requests from 2.28.2 to 2.31.0 in /e2e: Bumps [requests](https://github.com/psf/requests) from 2.28.2 to 2.31.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/psf/requests/releases">requests's releases</a>.</em></p>
<blockquote>
<h2>v2.31.0</h2>
<h2>2.31.0 (2023-05-22)</h2>
<p><strong>Security</strong></p>
<ul>
<li>
<p>Versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential
forwarding of <code>Proxy-Authorization</code> headers to destination servers when
following HTTPS redirects.</p>
<p>When proxies are defined with user info (<a href="https://user:pass@proxy:8080">https://user:pass@proxy:8080</a>), Requests
will construct a <code>Proxy-Authorization</code> header that is attached to the request to
authenticate with the proxy.</p>
<p>In cases where Requests receives a redirect response, it previously reattached
the <code>Proxy-Authorization</code> header incorrectly, resulting in the value being
sent through the tunneled connection to the destination server. Users who rely on
defining their proxy credentials in the URL are <em>strongly</em> encouraged to upgrade
to Requests 2.31.0+ to prevent unintentional leakage and rotate their proxy
credentials once the change has been fully deployed.</p>
<p>Users who do not use a proxy or do not supply their proxy credentials through
the user information portion of their proxy URL are not subject to this
vulnerability.</p>
<p>Full details can be read in our <a href="https://github.com/psf/requests/security/advisories/GHSA-j8r2-6x86-q33q">Github Security Advisory</a>
and <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-32681">CVE-2023-32681</a>.</p>
</li>
</ul>
<h2>v2.30.0</h2>
<h2>2.30.0 (2023-05-03)</h2>
<p><strong>Dependencies</strong></p>
<ul>
<li>
<p>⚠️ Added support for urllib3 2.0. ⚠️</p>
<p>This may contain minor breaking changes so we advise careful testing and
reviewing <a href="https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html">https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html</a>
prior to upgrading.</p>
<p>Users who wish to stay on urllib3 1.x can pin to <code>urllib3<2</code>.</p>
</li>
</ul>
<h2>v2.29.0</h2>
<h2>2.29.0 (2023-04-26)</h2>
<p><strong>Improvements</strong></p>
<ul>
<li>Requests now defers chunked requests to the urllib3 implementation to improve
standardization. (<a href="https://redirect.github.com/psf/requests/issues/6226">#6226</a>)</li>
<li>Requests relaxes header component requirements to support bytes/str subclasses. (<a href="https://redirect.github.com/psf/requests/issues/6356">#6356</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/psf/requests/blob/main/HISTORY.md">requests's changelog</a>.</em></p>
<blockquote>
<h2>2.31.0 (2023-05-22)</h2>
<p><strong>Security</strong></p>
<ul>
<li>
<p>Versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential
forwarding of <code>Proxy-Authorization</code> headers to destination servers when
following HTTPS redirects.</p>
<p>When proxies are defined with user info (<a href="https://user:pass@proxy:8080">https://user:pass@proxy:8080</a>), Requests
will construct a <code>Proxy-Authorization</code> header that is attached to the request to
authenticate with the proxy.</p>
<p>In cases where Requests receives a redirect response, it previously reattached
the <code>Proxy-Authorization</code> header incorrectly, resulting in the value being
sent through the tunneled connection to the destination server. Users who rely on
defining their proxy credentials in the URL are <em>strongly</em> encouraged to upgrade
to Requests 2.31.0+ to prevent unintentional leakage and rotate their proxy
credentials once the change has been fully deployed.</p>
<p>Users who do not use a proxy or do not supply their proxy credentials through
the user information portion of their proxy URL are not subject to this
vulnerability.</p>
<p>Full details can be read in our <a href="https://github.com/psf/requests/security/advisories/GHSA-j8r2-6x86-q33q">Github Security Advisory</a>
and <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-32681">CVE-2023-32681</a>.</p>
</li>
</ul>
<h2>2.30.0 (2023-05-03)</h2>
<p><strong>Dependencies</strong></p>
<ul>
<li>
<p>⚠️ Added support for urllib3 2.0. ⚠️</p>
<p>This may contain minor breaking changes so we advise careful testing and
reviewing <a href="https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html">https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html</a>
prior to upgrading.</p>
<p>Users who wish to stay on urllib3 1.x can pin to <code>urllib3<2</code>.</p>
</li>
</ul>
<h2>2.29.0 (2023-04-26)</h2>
<p><strong>Improvements</strong></p>
<ul>
<li>Requests now defers chunked requests to the urllib3 implementation to improve
standardization. (<a href="https://redirect.github.com/psf/requests/issues/6226">#6226</a>)</li>
<li>Requests relaxes header component requirements to support bytes/str subclasses. (<a href="https://redirect.github.com/psf/requests/issues/6356">#6356</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/psf/requests/commit/147c8511ddbfa5e8f71bbf5c18ede0c4ceb3bba4"><code>147c851</code></a> v2.31.0</li>
<li><a href="https://github.com/psf/requests/commit/74ea7cf7a6a27a4eeb2ae24e162bcc942a6706d5"><code>74ea7cf</code></a> Merge pull request from GHSA-j8r2-6x86-q33q</li>
<li><a href="https://github.com/psf/requests/commit/302225334678490ec66b3614a9dddb8a02c5f4fe"><code>3022253</code></a> test on pypy 3.8 and pypy 3.9 on windows and macos (<a href="https://redirect.github.com/psf/requests/issues/6424">#6424</a>)</li>
<li><a href="https://github.com/psf/requests/commit/b639e66c816514e40604d46f0088fbceec1a5149"><code>b639e66</code></a> test on py3.12 (<a href="https://redirect.github.com/psf/requests/issues/6448">#6448</a>)</li>
<li><a href="https://github.com/psf/requests/commit/d3d504436ef0c2ac7ec8af13738b04dcc8c694be"><code>d3d5044</code></a> Fixed a small typo (<a href="https://redirect.github.com/psf/requests/issues/6452">#6452</a>)</li>
<li><a href="https://github.com/psf/requests/commit/2ad18e0e10e7d7ecd5384c378f25ec8821a10a29"><code>2ad18e0</code></a> v2.30.0</li>
<li><a href="https://github.com/psf/requests/commit/f2629e9e3c7ce3c3c8c025bcd8db551101cbc773"><code>f2629e9</code></a> Remove strict parameter (<a href="https://redirect.github.com/psf/requests/issues/6434">#6434</a>)</li>
<li><a href="https://github.com/psf/requests/commit/87d63de8739263bbe17034fba2285c79780da7e8"><code>87d63de</code></a> v2.29.0</li>
<li><a href="https://github.com/psf/requests/commit/51716c4ef390136b0d4b800ec7665dd5503e64fc"><code>51716c4</code></a> enable the warnings plugin (<a href="https://redirect.github.com/psf/requests/issues/6416">#6416</a>)</li>
<li><a href="https://github.com/psf/requests/commit/a7da1ab3498b10ec3a3582244c94b2845f8a8e71"><code>a7da1ab</code></a> try on ubuntu 22.04 (<a href="https://redirect.github.com/psf/requests/issues/6418">#6418</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/psf/requests/compare/v2.28.2...v2.31.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/datasets-server/network/alerts).
</details> | closed | 2023-05-23T05:55:56Z | 2023-05-23T09:04:49Z | 2023-05-23T09:01:32Z | dependabot[bot] |
1,721,232,025 | Dataset Viewer issue for vishnun/NLP-KnowledgeGraph | ### Link
https://huggingface.co/datasets/vishnun/NLP-KnowledgeGraph
### Description
The dataset viewer is not working for dataset vishnun/NLP-KnowledgeGraph.
Error details:
```
Error code: ConfigNamesError
Exception: OSError
Message: [Errno 116] Stale file handle
Traceback: The previous step failed, the error is copied to this step: kind='/config-names' dataset='vishnun/NLP-KnowledgeGraph' config=None split=None---Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config_names.py", line 99, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 323, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1215, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1192, in dataset_module_factory
return HubDatasetModuleFactoryWithoutScript(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 825, in get_module
dataset_readme_path = cached_path(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 183, in cached_path
output_path = get_from_cache(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 611, in get_from_cache
http_get(
File "/usr/local/lib/python3.9/tempfile.py", line 496, in __exit__
result = self.file.__exit__(exc, value, tb)
OSError: [Errno 116] Stale file handle
```
| Dataset Viewer issue for vishnun/NLP-KnowledgeGraph: ### Link
https://huggingface.co/datasets/vishnun/NLP-KnowledgeGraph
### Description
The dataset viewer is not working for dataset vishnun/NLP-KnowledgeGraph.
Error details:
```
Error code: ConfigNamesError
Exception: OSError
Message: [Errno 116] Stale file handle
Traceback: The previous step failed, the error is copied to this step: kind='/config-names' dataset='vishnun/NLP-KnowledgeGraph' config=None split=None---Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config_names.py", line 99, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 323, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1215, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1192, in dataset_module_factory
return HubDatasetModuleFactoryWithoutScript(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 825, in get_module
dataset_readme_path = cached_path(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 183, in cached_path
output_path = get_from_cache(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 611, in get_from_cache
http_get(
File "/usr/local/lib/python3.9/tempfile.py", line 496, in __exit__
result = self.file.__exit__(exc, value, tb)
OSError: [Errno 116] Stale file handle
```
| closed | 2023-05-23T05:42:27Z | 2023-05-30T07:26:54Z | 2023-05-30T07:26:54Z | MangoFF |
1,721,112,866 | Dataset Viewer issue for shibing624/medical | ### Link
https://huggingface.co/datasets/shibing624/medical
### Description
The dataset viewer is not working for dataset shibing624/medical.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for shibing624/medical: ### Link
https://huggingface.co/datasets/shibing624/medical
### Description
The dataset viewer is not working for dataset shibing624/medical.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-05-23T03:54:00Z | 2023-05-23T16:50:51Z | 2023-05-23T16:50:51Z | shibing624 |
1,720,883,768 | Dataset Viewer issue for beskrovnykh/daniel-dataset-fragments | ### Link
https://huggingface.co/datasets/beskrovnykh/daniel-dataset-fragments
### Description
The dataset viewer is not working for dataset beskrovnykh/daniel-dataset-fragments.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for beskrovnykh/daniel-dataset-fragments: ### Link
https://huggingface.co/datasets/beskrovnykh/daniel-dataset-fragments
### Description
The dataset viewer is not working for dataset beskrovnykh/daniel-dataset-fragments.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-05-23T00:12:58Z | 2023-05-23T19:17:03Z | 2023-05-23T19:17:02Z | beskrovnykh |
1,720,558,896 | feat: 🎸 write cache + backfill only if job finished as expected | i.e.: if it has been cancelled, we ignore it. See previous work at https://github.com/huggingface/datasets-server/pull/1188. Note that after #1222, the number of warnings "...has a non-empty finished_at field..." has fallen to 26 logs among 20,000, while it was like 20% of the logs before!
Also:
- upgrade `requests` (vulnerability, fixes the CI)
- wait after the backfill to finish the job (the backfill should finish it anyway) | feat: 🎸 write cache + backfill only if job finished as expected: i.e.: if it has been cancelled, we ignore it. See previous work at https://github.com/huggingface/datasets-server/pull/1188. Note that after #1222, the number of warnings "...has a non-empty finished_at field..." has fallen to 26 logs among 20,000, while it was like 20% of the logs before!
Also:
- upgrade `requests` (vulnerability, fixes the CI)
- wait after the backfill to finish the job (the backfill should finish it anyway) | closed | 2023-05-22T21:10:51Z | 2023-05-23T09:03:33Z | 2023-05-23T09:00:35Z | severo |
1,720,137,557 | Rename /split-names-from-dataset-info | part of https://github.com/huggingface/datasets-server/issues/1086 | Rename /split-names-from-dataset-info: part of https://github.com/huggingface/datasets-server/issues/1086 | closed | 2023-05-22T17:37:26Z | 2023-05-23T18:39:59Z | 2023-05-23T18:36:48Z | polinaeterna |
1,720,041,617 | Add parquet metadata to api chart | forgot it in #1214 | Add parquet metadata to api chart: forgot it in #1214 | closed | 2023-05-22T16:38:13Z | 2023-05-22T16:42:02Z | 2023-05-22T16:38:56Z | lhoestq |
1,719,828,595 | Dataset Viewer issue for kietzmannlab/ecoset | ### Link
https://huggingface.co/datasets/kietzmannlab/ecoset
### Description
The dataset viewer is not working for dataset kietzmannlab/ecoset.
Error details:
```
Error code: ConfigNamesError
Exception: ImportError
Message: To be able to use kietzmannlab/ecoset, you need to install the following dependencies: boto3, botocore.
Please install them using 'pip install boto3 botocore' for instance.
Traceback: The previous step failed, the error is copied to this step: kind='/config-names' dataset='kietzmannlab/ecoset' config=None split=None---Traceback (most recent call last):
File "/src/workers/datasets_based/src/datasets_based/workers/config_names.py", line 89, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 323, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 1217, in dataset_module_factory
raise e1 from None
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 1188, in dataset_module_factory
return HubDatasetModuleFactoryWithScript(
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 907, in get_module
local_imports = _download_additional_modules(
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 221, in _download_additional_modules
raise ImportError(
ImportError: To be able to use kietzmannlab/ecoset, you need to install the following dependencies: boto3, botocore.
Please install them using 'pip install boto3 botocore' for instance.
```
| Dataset Viewer issue for kietzmannlab/ecoset: ### Link
https://huggingface.co/datasets/kietzmannlab/ecoset
### Description
The dataset viewer is not working for dataset kietzmannlab/ecoset.
Error details:
```
Error code: ConfigNamesError
Exception: ImportError
Message: To be able to use kietzmannlab/ecoset, you need to install the following dependencies: boto3, botocore.
Please install them using 'pip install boto3 botocore' for instance.
Traceback: The previous step failed, the error is copied to this step: kind='/config-names' dataset='kietzmannlab/ecoset' config=None split=None---Traceback (most recent call last):
File "/src/workers/datasets_based/src/datasets_based/workers/config_names.py", line 89, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 323, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 1217, in dataset_module_factory
raise e1 from None
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 1188, in dataset_module_factory
return HubDatasetModuleFactoryWithScript(
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 907, in get_module
local_imports = _download_additional_modules(
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 221, in _download_additional_modules
raise ImportError(
ImportError: To be able to use kietzmannlab/ecoset, you need to install the following dependencies: boto3, botocore.
Please install them using 'pip install boto3 botocore' for instance.
```
| closed | 2023-05-22T14:34:12Z | 2024-02-02T17:09:59Z | 2024-02-02T17:09:58Z | v-bosch |
1,719,689,350 | refactor: 💡 do only one request to get jobs in DatasetState | null | refactor: 💡 do only one request to get jobs in DatasetState: | closed | 2023-05-22T13:23:02Z | 2023-05-22T18:38:45Z | 2023-05-22T18:36:09Z | severo
1,719,461,882 | fix: 🐛 backfill the dataset after finishing the job | null | fix: 🐛 backfill the dataset after finishing the job: | closed | 2023-05-22T11:10:41Z | 2023-05-22T11:23:31Z | 2023-05-22T11:20:36Z | severo
1,719,317,766 | fix: 🐛 if a step depends on parallel steps, both must be used | otherwise, the "error" "Response has already been computed and stored in cache kind: split-first-rows-from-parquet. Compute will be skipped" is propagated, instead of using the other cache entry as it was meant to.
Unfortunately, we will have to relaunch a lot of jobs | fix: 🐛 if a step depends on parallel steps, both must be used: otherwise, the "error" "Response has already been computed and stored in cache kind: split-first-rows-from-parquet. Compute will be skipped" is propagated, instead of using the other cache entry as it was meant to.
Unfortunately, we will have to relaunch a lot of jobs | closed | 2023-05-22T09:45:57Z | 2023-05-22T10:25:15Z | 2023-05-22T10:22:31Z | severo |
1,719,201,163 | feat: 🎸 tweak queue parameters to flush quick jobs | null | feat: 🎸 tweak queue parameters to flush quick jobs: | closed | 2023-05-22T08:44:44Z | 2023-05-22T08:49:19Z | 2023-05-22T08:46:15Z | severo
1,719,167,405 | Update old cache entries automatically | Some cache entries are very old.
For example, the following entry was computed more than 3 months ago and contains an error_code that is no longer present in the codebase. It should have been recomputed at some point, but it wasn't, for some reason:
<img width="1652" alt="Capture d'écran 2023-05-22 à 10 22 29" src="https://github.com/huggingface/datasets-server/assets/1676121/c6cea7a1-ebc6-4431-8eb2-5859209d7854">
| Update old cache entries automatically: Some cache entries are very old.
For example, the following entry was computed more than 3 months ago and contains an error_code that is no longer present in the codebase. It should have been recomputed at some point, but it wasn't, for some reason:
<img width="1652" alt="Capture d'écran 2023-05-22 à 10 22 29" src="https://github.com/huggingface/datasets-server/assets/1676121/c6cea7a1-ebc6-4431-8eb2-5859209d7854">
| closed | 2023-05-22T08:23:51Z | 2024-01-09T15:45:16Z | 2024-01-09T15:45:16Z | severo |
1,719,147,715 | Dataset Viewer issue for nyuuzyou/AnimeHeadsv3 | **Link**
https://huggingface.co/datasets/nyuuzyou/AnimeHeadsv3
**Description**
Currently, when attempting to view the dataset using the provided viewer, I am encountering the following error:
```
ERROR: type should be image, got {"src": "https://datasets-server.huggingface.co/assets/nyuuzyou/AnimeHeadsv3/--/With augmentation/train/0/image/image.jpg", "height": 360, "width": 640}
```
Initially, I thought there might be a mistake in the dataset loader configuration. However, even when I used `np.zeros(shape=(16, 16, 3), dtype=np.uint8)` as a placeholder for the images in the dataset loader, the error still persists.
I believe this issue is similar to the one reported in the GitHub repository for the Hugging Face Datasets Server, specifically in issue #1137 (https://github.com/huggingface/datasets-server/issues/1137). The previous issue mentioned a similar problem with image loading.
Please let me know if there is any additional information or assistance I can provide. | Dataset Viewer issue for nyuuzyou/AnimeHeadsv3: **Link**
https://huggingface.co/datasets/nyuuzyou/AnimeHeadsv3
**Description**
Currently, when attempting to view the dataset using the provided viewer, I am encountering the following error:
```
ERROR: type should be image, got {"src": "https://datasets-server.huggingface.co/assets/nyuuzyou/AnimeHeadsv3/--/With augmentation/train/0/image/image.jpg", "height": 360, "width": 640}
```
Initially, I thought there might be a mistake in the dataset loader configuration. However, even when I used `np.zeros(shape=(16, 16, 3), dtype=np.uint8)` as a placeholder for the images in the dataset loader, the error still persists.
I believe this issue is similar to the one reported in the GitHub repository for the Hugging Face Datasets Server, specifically in issue #1137 (https://github.com/huggingface/datasets-server/issues/1137). The previous issue mentioned a similar problem with image loading.
Please let me know if there is any additional information or assistance I can provide. | closed | 2023-05-22T08:13:12Z | 2023-06-26T18:58:47Z | 2023-06-26T18:58:46Z | nyuuzyou |
1,719,098,572 | feat: 🎸 delete metrics for /split-names-from-streaming | we missed this migration, which means the Grafana charts still show 5,000 pending jobs for this step, even though these jobs no longer exist. | feat: 🎸 delete metrics for /split-names-from-streaming: we missed this migration, which means the Grafana charts still show 5,000 pending jobs for this step, even though these jobs no longer exist. | closed | 2023-05-22T07:46:35Z | 2023-05-22T07:51:37Z | 2023-05-22T07:48:52Z | severo
1,718,973,689 | Dataset Viewer issue for ceval/ceval-exam | ### Link
https://huggingface.co/datasets/ceval/ceval-exam
### Description
The dataset viewer is not working for dataset ceval/ceval-exam.
Error details:
```
Error code: ConfigNamesError
Exception: FileNotFoundError
Message: Couldn't find a dataset script at /src/services/worker/ceval/ceval-exam/ceval-exam.py or any data file in the same directory. Couldn't find 'ceval/ceval-exam' on the Hugging Face Hub either: FileNotFoundError: [Errno 2] No such file or directory: '/datasets-server-cache/all/datasets/2023-05-18-14-02-51--config-names-ceval-ceval-exam-978bb85a/downloads/7a60e8ce0ce0606a529d46c365947855cbb11c87fdf815889826fb1c727b54f1.7dae390f456cff1d62f338fbf3c7dcdbc9c8ab289a0c591434fadb67af7702dd.py'
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 61, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 323, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1215, in dataset_module_factory
raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at /src/services/worker/ceval/ceval-exam/ceval-exam.py or any data file in the same directory. Couldn't find 'ceval/ceval-exam' on the Hugging Face Hub either: FileNotFoundError: [Errno 2] No such file or directory: '/datasets-server-cache/all/datasets/2023-05-18-14-02-51--config-names-ceval-ceval-exam-978bb85a/downloads/7a60e8ce0ce0606a529d46c365947855cbb11c87fdf815889826fb1c727b54f1.7dae390f456cff1d62f338fbf3c7dcdbc9c8ab289a0c591434fadb67af7702dd.py'
```
| Dataset Viewer issue for ceval/ceval-exam: ### Link
https://huggingface.co/datasets/ceval/ceval-exam
### Description
The dataset viewer is not working for dataset ceval/ceval-exam.
Error details:
```
Error code: ConfigNamesError
Exception: FileNotFoundError
Message: Couldn't find a dataset script at /src/services/worker/ceval/ceval-exam/ceval-exam.py or any data file in the same directory. Couldn't find 'ceval/ceval-exam' on the Hugging Face Hub either: FileNotFoundError: [Errno 2] No such file or directory: '/datasets-server-cache/all/datasets/2023-05-18-14-02-51--config-names-ceval-ceval-exam-978bb85a/downloads/7a60e8ce0ce0606a529d46c365947855cbb11c87fdf815889826fb1c727b54f1.7dae390f456cff1d62f338fbf3c7dcdbc9c8ab289a0c591434fadb67af7702dd.py'
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 61, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 323, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1215, in dataset_module_factory
raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at /src/services/worker/ceval/ceval-exam/ceval-exam.py or any data file in the same directory. Couldn't find 'ceval/ceval-exam' on the Hugging Face Hub either: FileNotFoundError: [Errno 2] No such file or directory: '/datasets-server-cache/all/datasets/2023-05-18-14-02-51--config-names-ceval-ceval-exam-978bb85a/downloads/7a60e8ce0ce0606a529d46c365947855cbb11c87fdf815889826fb1c727b54f1.7dae390f456cff1d62f338fbf3c7dcdbc9c8ab289a0c591434fadb67af7702dd.py'
```
| closed | 2023-05-22T06:17:48Z | 2023-05-22T07:39:49Z | 2023-05-22T07:39:49Z | jxhe |
1,718,577,854 | Dataset Viewer issue for AntonioRenatoMontefusco/kddChallenge2023 | ### Link
https://huggingface.co/datasets/AntonioRenatoMontefusco/kddChallenge2023
### Description
The dataset viewer is not working for dataset AntonioRenatoMontefusco/kddChallenge2023.
Error details:
```
Error code: JobManagerCrashedError
```
| Dataset Viewer issue for AntonioRenatoMontefusco/kddChallenge2023: ### Link
https://huggingface.co/datasets/AntonioRenatoMontefusco/kddChallenge2023
### Description
The dataset viewer is not working for dataset AntonioRenatoMontefusco/kddChallenge2023.
Error details:
```
Error code: JobManagerCrashedError
```
| closed | 2023-05-21T17:24:16Z | 2023-05-23T08:29:44Z | 2023-05-23T08:29:43Z | AntonioRenatoMontefusco |
1,718,189,053 | Use parquet metadata in /rows | Step 2 of https://github.com/huggingface/datasets-server/issues/1186
## Implementation details
I implemented ParquetIndexWithMetadata (new) and ParquetIndexWithoutMetadata (from the existing code):
- ParquetIndexWithMetadata is used when `config-parquet-metadata` is cached and is fast (see the sketch below)
- ParquetIndexWithoutMetadata is the old code, which runs when `config-parquet-metadata` is not available yet
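A hedged sketch of the row-group lookup such an index can do once the per-file metadata (row counts and row-group sizes) is known in advance; this is illustrative pyarrow code on a local file, not the actual `ParquetIndexWithMetadata` implementation, and the file path is made up:
```python
import pyarrow.parquet as pq

def read_row(parquet_path: str, row_index: int):
    f = pq.ParquetFile(parquet_path)
    seen = 0
    for group in range(f.metadata.num_row_groups):
        num_rows = f.metadata.row_group(group).num_rows
        if row_index < seen + num_rows:
            table = f.read_row_group(group)          # only this row group is decoded
            return table.slice(row_index - seen, 1)  # the single requested row
        seen += num_rows
    raise IndexError(row_index)
```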
I think in the long run we should remove ParquetIndexWithoutMetadata. I just kept it so that the UI doesn't show an error when `config-parquet-metadata` is not available.
I re-added support for image and audio for the ParquetIndexWithMetadata | Use parquet metadata in /rows: Step 2 of https://github.com/huggingface/datasets-server/issues/1186
## Implementation details
I implemented ParquetIndexWithMetadata (new) and ParquetIndexWithoutMetadata (from the existing code):
- ParquetIndexWithMetadata is used when `config-parquet-metadata` is cached and is fast
- ParquetIndexWithoutMetadata is the old code, which runs when `config-parquet-metadata` is not available yet
I think in the long run we should remove ParquetIndexWithoutMetadata. I just kept it so that the UI doesn't show an error when `config-parquet-metadata` is not available.
I re-added support for image and audio for the ParquetIndexWithMetadata | closed | 2023-05-20T14:30:24Z | 2023-05-22T16:27:28Z | 2023-05-22T16:24:39Z | lhoestq |
1,717,744,192 | feat: Part #3: Adding "partition" granularity level logic | First part of approach # 2 of https://github.com/huggingface/datasets-server/issues/1087
Adding a new granularity level - "Partition" - implies also adding a new field to Job and Cache.
Depends on https://github.com/huggingface/datasets-server/pull/1263, https://github.com/huggingface/datasets-server/pull/1259
and https://github.com/huggingface/datasets-server/pull/1260 | feat: Part #3: Adding "partition" granularity level logic: First part of approach # 2 of https://github.com/huggingface/datasets-server/issues/1087
Adding a new granularity level - "Partition" - implies also adding a new field to Job and Cache.
Depends on https://github.com/huggingface/datasets-server/pull/1263, https://github.com/huggingface/datasets-server/pull/1259
and https://github.com/huggingface/datasets-server/pull/1260 | closed | 2023-05-19T19:49:58Z | 2023-10-10T13:29:38Z | 2023-06-01T14:37:28Z | AndreaFrancis |
1,717,723,316 | Dataset Viewer issue for tarteel-ai/everyayah | ### Link
https://huggingface.co/datasets/tarteel-ai/everyayah
### Description
The dataset viewer is not working for dataset tarteel-ai/everyayah.
Error details:
```
Error code: JobRunnerCrashedError
```
| Dataset Viewer issue for tarteel-ai/everyayah: ### Link
https://huggingface.co/datasets/tarteel-ai/everyayah
### Description
The dataset viewer is not working for dataset tarteel-ai/everyayah.
Error details:
```
Error code: JobRunnerCrashedError
```
| closed | 2023-05-19T19:27:22Z | 2023-05-23T08:42:51Z | 2023-05-23T08:42:51Z | manna1lix |
1,717,480,516 | feat: 🎸 add logs to the migrations | null | feat: 🎸 add logs to the migrations: | closed | 2023-05-19T16:03:14Z | 2023-05-19T16:08:54Z | 2023-05-19T16:06:16Z | severo
1,717,450,689 | fix: 🐛 missing refactoring in the last merge | null | fix: 🐛 missing refactoring in the last merge: | closed | 2023-05-19T15:40:58Z | 2023-05-19T15:58:43Z | 2023-05-19T15:56:00Z | severo
1,717,434,436 | A lot of jobs finish with Warning: ... has a non-empty finished_at field. Force finishing anyway | See https://github.com/huggingface/datasets-server/pull/1203#issuecomment-1554544553
Started jobs should have an empty finished_at field. | A lot of jobs finish with Warning: ... has a non-empty finished_at field. Force finishing anyway: See https://github.com/huggingface/datasets-server/pull/1203#issuecomment-1554544553
Started jobs should have an empty finished_at field. | closed | 2023-05-19T15:27:33Z | 2023-08-11T15:27:18Z | 2023-08-11T15:27:17Z | severo |
1,717,389,689 | chore: 🤖 ignore a vulnerability for now | null | chore: 🤖 ignore a vulnerability for now: | closed | 2023-05-19T14:57:19Z | 2023-05-19T15:13:41Z | 2023-05-19T15:10:42Z | severo
1,717,374,182 | refactor: 💡 only pass is_success to finish_job | so that the caller does not have to know the queue job statuses.
Also: finish_job returns a boolean to say if it was in an expected state. | refactor: 💡 only pass is_success to finish_job: so that the caller does not have to know the queue job statuses.
Also: finish_job returns a boolean to say if it was in an expected state. | closed | 2023-05-19T14:45:48Z | 2023-05-19T15:31:24Z | 2023-05-19T15:28:34Z | severo |
1,717,357,015 | refactor: 💡 remove two methods | null | refactor: 💡 remove two methods: | closed | 2023-05-19T14:34:11Z | 2023-05-19T15:33:58Z | 2023-05-19T15:31:10Z | severo
1,717,320,407 | fix: 🐛 the started jobinfo always contained priority=NORMAL | Now we get the value as expected. This means that the backfill function will create jobs at the same level of priority, instead of moving everything to the NORMAL priority queue | fix: 🐛 the started jobinfo always contained priority=NORMAL: Now we get the value as expected. This means that the backfill function will create jobs at the same level of priority, instead of moving everything to the NORMAL priority queue | closed | 2023-05-19T14:09:55Z | 2023-05-19T14:36:04Z | 2023-05-19T14:32:35Z | severo
1,717,308,966 | Update transformers for pip audit | null | Update transformers for pip audit: | closed | 2023-05-19T14:04:58Z | 2023-05-19T15:00:55Z | 2023-05-19T14:58:06Z | lhoestq |
1,717,139,265 | Again: ignore result of job runner if job has been canceled | First PR: #1188
Reverted by #1196
New try. First, I get the code again, and then I will commit the fix once I find the issue.
| Again: ignore result of job runner if job has been canceled: First PR: #1188
Reverted by #1196
New try. First, I get the code again, and then I will commit the fix once I find the issue.
| closed | 2023-05-19T12:03:53Z | 2024-01-26T09:01:34Z | 2023-05-19T13:52:37Z | severo |
1,716,948,398 | Dataset Viewer issue for phamson02/vietnamese-poetry-corpus | ### Link
https://huggingface.co/datasets/phamson02/vietnamese-poetry-corpus
### Description
The dataset viewer is not working for dataset phamson02/vietnamese-poetry-corpus.
Error details:
```
Error code: ConfigNamesError
Exception: OSError
Message: [Errno 116] Stale file handle
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 61, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 323, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1219, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1196, in dataset_module_factory
return HubDatasetModuleFactoryWithoutScript(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 829, in get_module
dataset_readme_path = cached_path(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 183, in cached_path
output_path = get_from_cache(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 611, in get_from_cache
http_get(
File "/usr/local/lib/python3.9/tempfile.py", line 496, in __exit__
result = self.file.__exit__(exc, value, tb)
OSError: [Errno 116] Stale file handle
```
| Dataset Viewer issue for phamson02/vietnamese-poetry-corpus: ### Link
https://huggingface.co/datasets/phamson02/vietnamese-poetry-corpus
### Description
The dataset viewer is not working for dataset phamson02/vietnamese-poetry-corpus.
Error details:
```
Error code: ConfigNamesError
Exception: OSError
Message: [Errno 116] Stale file handle
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 61, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 323, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1219, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1196, in dataset_module_factory
return HubDatasetModuleFactoryWithoutScript(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 829, in get_module
dataset_readme_path = cached_path(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 183, in cached_path
output_path = get_from_cache(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 611, in get_from_cache
http_get(
File "/usr/local/lib/python3.9/tempfile.py", line 496, in __exit__
result = self.file.__exit__(exc, value, tb)
OSError: [Errno 116] Stale file handle
```
| closed | 2023-05-19T09:38:54Z | 2023-05-22T06:52:50Z | 2023-05-22T06:52:50Z | phamson02 |
1,716,133,975 | Dedicated worker for split-opt-in-out-urls-scan | null | Dedicated worker for split-opt-in-out-urls-scan: | closed | 2023-05-18T19:12:23Z | 2023-05-18T19:19:22Z | 2023-05-18T19:16:44Z | AndreaFrancis |
1,716,105,921 | Temporarily adding a dedicated worker for config/dataset-opt-in-out-urls-count | null | Temporarily adding a dedicated worker for config/dataset-opt-in-out-urls-count: | closed | 2023-05-18T18:49:38Z | 2023-05-18T19:00:33Z | 2023-05-18T18:57:11Z | AndreaFrancis
1,716,091,421 | Descriptive statistics | This PR introduces the following measurements/statistics:
### numerical columns (float and int):
- nan values count
- nan values percentage
- min
- max
- mean
- median
- std
- histogram:
  - for float: a fixed number of bins (which is a global config parameter - tell me if it's overkill :D)
  - for integers: the bin size is always an integer value. If the difference between max and min is less than the fixed number of bins, the bin size is equal to 1; otherwise it's `round((max-min)/num_bins)` - which means the bins might not be equal (e.g. the last one being smaller), but otherwise there could be float bin edges, which is nonsense for integers (see the sketch below).
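A minimal sketch of the integer binning rule above, assuming numpy and a hypothetical `int_histogram` helper (the actual job runner code may differ):
```python
import numpy as np

def int_histogram(values, num_bins=10):
    minimum, maximum = int(min(values)), int(max(values))  # assumes max > min
    # keep the bin size an integer: 1 when the range is smaller than num_bins,
    # otherwise round((max - min) / num_bins)
    bin_size = 1 if (maximum - minimum) < num_bins else round((maximum - minimum) / num_bins)
    bin_edges = list(range(minimum, maximum, bin_size)) + [maximum]  # the last bin may be smaller
    hist, edges = np.histogram(values, bins=bin_edges)
    return hist.tolist(), [int(edge) for edge in edges]

hist, bin_edges = int_histogram([0, 1, 1, 2, 5, 6, 6], num_bins=10)
```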
### categorical columns
These are **only** `ClassLabel` columns, which is not ideal because some integers might also be categories, as well as strings, but we can't be sure in advance (a small sketch follows the list below).
- nan values count
- nan values percentage
- number of unique values
- counts for each value
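A hedged, pandas-based sketch of these categorical measurements (not necessarily how the job runner computes them; `class_label_stats` and its arguments are illustrative):
```python
import pandas as pd

def class_label_stats(values: pd.Series, class_names: list) -> dict:
    nan_count = int(values.isna().sum())
    labels = values.dropna().astype(int).map(lambda i: class_names[i])
    return {
        "nan_count": nan_count,
        "nan_proportion": round(nan_count / len(values), 5),
        "n_unique": int(labels.nunique()),
        "frequencies": labels.value_counts().to_dict(),
    }

class_label_stats(pd.Series([0, 1, 1, None, 2]), ["this", "are", "random"])
```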
<details><summary>Here's the result for my toy dataset with 1000 rows </summary>
<p>
[polinaeterna/delays_nans](https://huggingface.co/datasets/polinaeterna/delays_nans)
```python
{
"num_examples": 100000,
"statistics": [
{
"column_name": "class_col",
"column_type": "class_label",
"column_statistics": {
"nan_count": 0,
"nan_proportion": 0.0,
"n_unique": 5,
"frequencies": {
"this": 19834,
"are": 20159,
"random": 20109,
"words": 20172,
"test": 19726
}
}
},
{
"column_name": "class_col_nans",
"column_type": "class_label",
"column_statistics": {
"nan_count": 49972,
"nan_proportion": 0.49972,
"n_unique": 5,
"frequencies": {
"this": 9904,
"are": 10021,
"random": 10126,
"words": 10061,
"test": 9916
}
}
},
{
"column_name": "delay",
"column_type": "float",
"column_statistics": {
"nan_count": 0,
"nan_proportion": 0.0,
"min": -10.206,
"max": 8.48053,
"mean": 2.10174,
"median": 3.4012,
"std": 3.12487,
"histogram": {
"hist": [
2,
34,
256,
15198,
9037,
2342,
12743,
45114,
14904,
370
],
"bin_edges": [
-10.206,
-8.33734,
-6.46869,
-4.60004,
-2.73139,
-0.86273,
1.00592,
2.87457,
4.74322,
6.61188,
8.48053
]
}
}
},
{
"column_name": "delay_nans",
"column_type": "float",
"column_statistics": {
"nan_count": 49892,
"nan_proportion": 0.49892,
"min": -10.206,
"max": 8.48053,
"mean": 2.11288,
"median": 3.4012,
"std": 3.11722,
"histogram": {
"hist": [
1,
17,
137,
7515,
4522,
1197,
6481,
22593,
7473,
172
],
"bin_edges": [
-10.206,
-8.33734,
-6.46869,
-4.60004,
-2.73139,
-0.86273,
1.00592,
2.87457,
4.74322,
6.61188,
8.48053
]
}
}
},
{
"column_name": "temp",
"column_type": "float",
"column_statistics": {
"nan_count": 0,
"nan_proportion": 0.0,
"min": 0.6,
"max": 14.9,
"mean": 7.28953,
"median": 7.5,
"std": 3.05441,
"histogram": {
"hist": [
4711,
9781,
10349,
9781,
20166,
17046,
13965,
8794,
4055,
1352
],
"bin_edges": [
0.6,
2.03,
3.46,
4.89,
6.32,
7.75,
9.18,
10.61,
12.04,
13.47,
14.9
]
}
}
},
{
"column_name": "temp_nans",
"column_type": "float",
"column_statistics": {
"nan_count": 49959,
"nan_proportion": 0.49959,
"min": 0.6,
"max": 14.9,
"mean": 7.29404,
"median": 7.5,
"std": 3.05107,
"histogram": {
"hist": [
2392,
4855,
5143,
4879,
10040,
8571,
7058,
4452,
2014,
637
],
"bin_edges": [
0.6,
2.03,
3.46,
4.89,
6.32,
7.75,
9.18,
10.61,
12.04,
13.47,
14.9
]
}
}
},
{
"column_name": "vehicle_type",
"column_type": "int",
"column_statistics": {
"nan_count": 0,
"nan_proportion": 0.0,
"min": 0,
"max": 2,
"mean": 0.82646,
"median": 1.0,
"std": 0.72333,
"histogram": {
"hist": [
36343,
44668,
18989
],
"bin_edges": [
0,
1,
2,
2
]
}
}
},
{
"column_name": "vehicle_type_nans",
"column_type": "int",
"column_statistics": {
"nan_count": 50247,
"nan_proportion": 0.50247,
"min": 0,
"max": 2,
"mean": 0.82542,
"median": 1.0,
"std": 0.72384,
"histogram": {
"hist": [
18135,
22169,
9449
],
"bin_edges": [
0,
1,
2,
2
]
}
}
},
{
"column_name": "weekday",
"column_type": "int",
"column_statistics": {
"nan_count": 0,
"nan_proportion": 0.0,
"min": 0,
"max": 6,
"mean": 3.08063,
"median": 3.0,
"std": 1.90347,
"histogram": {
"hist": [
10282,
15416,
15291,
15201,
15586,
15226,
12998
],
"bin_edges": [
0,
1,
2,
3,
4,
5,
6,
6
]
}
}
},
{
"column_name": "weekday_nans",
"column_type": "int",
"column_statistics": {
"nan_count": 50065,
"nan_proportion": 0.50065,
"min": 0,
"max": 6,
"mean": 3.07762,
"median": 3.0,
"std": 1.90272,
"histogram": {
"hist": [
5136,
7695,
7711,
7550,
7749,
7646,
6448
],
"bin_edges": [
0,
1,
2,
3,
4,
5,
6,
6
]
}
}
}
]
}
```
</p>
</details> | Descriptive statistics: This PR introduces the following measurements/statistics:
### numerical columns (float and int):
- nan values count
- nan values percentage
- min
- max
- mean
- median
- std
- histogram:
  - for float: a fixed number of bins (which is a global config parameter - tell me if it's overkill :D)
  - for integers: the bin size is always an integer value. If the difference between max and min is less than the fixed number of bins, the bin size is equal to 1; otherwise it's `round((max-min)/num_bins)` - which means the bins might not be equal (e.g. the last one being smaller), but otherwise there could be float bin edges, which is nonsense for integers.
### categorical columns
These are **only** `ClassLabel` columns, which is not ideal because some integers might also be categories, as well as strings, but we can't be sure in advance.
- nan values count
- nan values percentage
- number of unique values
- counts for each value
<details><summary>Here's the result for my toy dataset with 1000 rows </summary>
<p>
[polinaeterna/delays_nans](https://huggingface.co/datasets/polinaeterna/delays_nans)
```python
{
"num_examples": 100000,
"statistics": [
{
"column_name": "class_col",
"column_type": "class_label",
"column_statistics": {
"nan_count": 0,
"nan_proportion": 0.0,
"n_unique": 5,
"frequencies": {
"this": 19834,
"are": 20159,
"random": 20109,
"words": 20172,
"test": 19726
}
}
},
{
"column_name": "class_col_nans",
"column_type": "class_label",
"column_statistics": {
"nan_count": 49972,
"nan_proportion": 0.49972,
"n_unique": 5,
"frequencies": {
"this": 9904,
"are": 10021,
"random": 10126,
"words": 10061,
"test": 9916
}
}
},
{
"column_name": "delay",
"column_type": "float",
"column_statistics": {
"nan_count": 0,
"nan_proportion": 0.0,
"min": -10.206,
"max": 8.48053,
"mean": 2.10174,
"median": 3.4012,
"std": 3.12487,
"histogram": {
"hist": [
2,
34,
256,
15198,
9037,
2342,
12743,
45114,
14904,
370
],
"bin_edges": [
-10.206,
-8.33734,
-6.46869,
-4.60004,
-2.73139,
-0.86273,
1.00592,
2.87457,
4.74322,
6.61188,
8.48053
]
}
}
},
{
"column_name": "delay_nans",
"column_type": "float",
"column_statistics": {
"nan_count": 49892,
"nan_proportion": 0.49892,
"min": -10.206,
"max": 8.48053,
"mean": 2.11288,
"median": 3.4012,
"std": 3.11722,
"histogram": {
"hist": [
1,
17,
137,
7515,
4522,
1197,
6481,
22593,
7473,
172
],
"bin_edges": [
-10.206,
-8.33734,
-6.46869,
-4.60004,
-2.73139,
-0.86273,
1.00592,
2.87457,
4.74322,
6.61188,
8.48053
]
}
}
},
{
"column_name": "temp",
"column_type": "float",
"column_statistics": {
"nan_count": 0,
"nan_proportion": 0.0,
"min": 0.6,
"max": 14.9,
"mean": 7.28953,
"median": 7.5,
"std": 3.05441,
"histogram": {
"hist": [
4711,
9781,
10349,
9781,
20166,
17046,
13965,
8794,
4055,
1352
],
"bin_edges": [
0.6,
2.03,
3.46,
4.89,
6.32,
7.75,
9.18,
10.61,
12.04,
13.47,
14.9
]
}
}
},
{
"column_name": "temp_nans",
"column_type": "float",
"column_statistics": {
"nan_count": 49959,
"nan_proportion": 0.49959,
"min": 0.6,
"max": 14.9,
"mean": 7.29404,
"median": 7.5,
"std": 3.05107,
"histogram": {
"hist": [
2392,
4855,
5143,
4879,
10040,
8571,
7058,
4452,
2014,
637
],
"bin_edges": [
0.6,
2.03,
3.46,
4.89,
6.32,
7.75,
9.18,
10.61,
12.04,
13.47,
14.9
]
}
}
},
{
"column_name": "vehicle_type",
"column_type": "int",
"column_statistics": {
"nan_count": 0,
"nan_proportion": 0.0,
"min": 0,
"max": 2,
"mean": 0.82646,
"median": 1.0,
"std": 0.72333,
"histogram": {
"hist": [
36343,
44668,
18989
],
"bin_edges": [
0,
1,
2,
2
]
}
}
},
{
"column_name": "vehicle_type_nans",
"column_type": "int",
"column_statistics": {
"nan_count": 50247,
"nan_proportion": 0.50247,
"min": 0,
"max": 2,
"mean": 0.82542,
"median": 1.0,
"std": 0.72384,
"histogram": {
"hist": [
18135,
22169,
9449
],
"bin_edges": [
0,
1,
2,
2
]
}
}
},
{
"column_name": "weekday",
"column_type": "int",
"column_statistics": {
"nan_count": 0,
"nan_proportion": 0.0,
"min": 0,
"max": 6,
"mean": 3.08063,
"median": 3.0,
"std": 1.90347,
"histogram": {
"hist": [
10282,
15416,
15291,
15201,
15586,
15226,
12998
],
"bin_edges": [
0,
1,
2,
3,
4,
5,
6,
6
]
}
}
},
{
"column_name": "weekday_nans",
"column_type": "int",
"column_statistics": {
"nan_count": 50065,
"nan_proportion": 0.50065,
"min": 0,
"max": 6,
"mean": 3.07762,
"median": 3.0,
"std": 1.90272,
"histogram": {
"hist": [
5136,
7695,
7711,
7550,
7749,
7646,
6448
],
"bin_edges": [
0,
1,
2,
3,
4,
5,
6,
6
]
}
}
}
]
}
```
</p>
</details> | closed | 2023-05-18T18:36:25Z | 2023-07-27T15:56:48Z | 2023-07-27T15:51:05Z | polinaeterna |
1,716,035,065 | disable prod backfill for now | the opt-in-out-urls jobs are filling up the job queue faster than it's being emptied, leading to 300k+ waiting jobs | disable prod backfill for now: the opt-in-out-urls jobs are filling up the job queue faster than it's being emptied, leading to 300k+ waiting jobs | closed | 2023-05-18T17:56:39Z | 2023-05-19T08:36:08Z | 2023-05-18T18:50:07Z | lhoestq
1,716,005,973 | Set datetime types in admin ui | to fix errors when duckdb tries to cast the columns like "started_at"
(already deployed on HF - I ran my tests there ^^') | Set datetime types in admin ui: to fix errors when duckdb tries to cast the columns like "started_at"
(already deployed on HF - I ran my tests there ^^') | closed | 2023-05-18T17:34:26Z | 2023-05-19T11:46:59Z | 2023-05-19T11:43:30Z | lhoestq |
1,715,648,909 | Revert "feat: 🎸 ignore result of job runner if job has been canceled … | …(#1188)"
This reverts commit a85b08697399a06dc2a98539dd4b9679cf6da8be.
For some reason the queue stopped picking jobs after the deploy that included this change in queue.py | Revert "feat: 🎸 ignore result of job runner if job has been canceled …: …(#1188)"
This reverts commit a85b08697399a06dc2a98539dd4b9679cf6da8be.
For some reason the queue stopped picking jobs after the deploy that included this change in queue.py | closed | 2023-05-18T13:32:38Z | 2023-05-18T13:46:39Z | 2023-05-18T13:43:50Z | lhoestq |
1,715,554,211 | Dataset Viewer issue for under-tree/prepared-yagpt | ### Link
https://huggingface.co/datasets/under-tree/prepared-yagpt
### Description
The dataset viewer is not working for dataset under-tree/prepared-yagpt.
Error details:
Dataset was pushed in the following way:
```python
final_dataset.push_to_hub(checkpoint)
```
where final_dataset is a DatasetDict object
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for under-tree/prepared-yagpt: ### Link
https://huggingface.co/datasets/under-tree/prepared-yagpt
### Description
The dataset viewer is not working for dataset under-tree/prepared-yagpt.
Error details:
Dataset was pushed in the following way:
```python
final_dataset.push_to_hub(checkpoint)
```
where final_dataset is a DatasetDict object
```
Error code: ResponseNotReady
```
| closed | 2023-05-18T12:29:09Z | 2023-05-19T08:34:58Z | 2023-05-19T08:34:58Z | RodionfromHSE |
1,715,476,279 | Dataset Viewer issue for Fredithefish/GPTeacher-for-RedPajama-Chat | ### Link
https://huggingface.co/datasets/Fredithefish/GPTeacher-for-RedPajama-Chat
### Description
The dataset viewer is not working for dataset Fredithefish/GPTeacher-for-RedPajama-Chat.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for Fredithefish/GPTeacher-for-RedPajama-Chat: ### Link
https://huggingface.co/datasets/Fredithefish/GPTeacher-for-RedPajama-Chat
### Description
The dataset viewer is not working for dataset Fredithefish/GPTeacher-for-RedPajama-Chat.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-05-18T11:29:37Z | 2023-05-19T08:35:31Z | 2023-05-19T08:35:31Z | fredi-python |
1,714,920,021 | Dataset Viewer issue for RengJEY/Fast_Food_classification | ### Link
https://huggingface.co/datasets/RengJEY/Fast_Food_classification
### Description
The dataset viewer is not working for dataset RengJEY/Fast_Food_classification.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for RengJEY/Fast_Food_classification: ### Link
https://huggingface.co/datasets/RengJEY/Fast_Food_classification
### Description
The dataset viewer is not working for dataset RengJEY/Fast_Food_classification.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-05-18T03:20:35Z | 2023-05-18T05:25:04Z | 2023-05-18T05:24:48Z | RENGJEY |
1,713,811,938 | Don't return an error on /first-rows (or later: /rows) if one image is failing | See https://huggingface.co/datasets/datadrivenscience/ship-detection
<img width="1034" alt="Capture dβeΜcran 2023-05-17 aΜ 14 33 42" src="https://github.com/huggingface/datasets-server/assets/1676121/9e12612f-bb42-4460-8c7d-91fc75534518">
```
Error code: StreamingRowsError
Exception: DecompressionBombError
Message: Image size (806504000 pixels) exceeds limit of 178956970 pixels, could be decompression bomb DOS attack.
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/utils.py", line 327, in get_rows_or_raise
return get_rows(
File "/src/services/worker/src/worker/utils.py", line 271, in decorator
return func(*args, **kwargs)
File "/src/services/worker/src/worker/utils.py", line 307, in get_rows
rows_plus_one = list(itertools.islice(ds, rows_max_number + 1))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 941, in __iter__
yield _apply_feature_types_on_example(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 700, in _apply_feature_types_on_example
decoded_example = features.decode_example(encoded_example, token_per_repo_id=token_per_repo_id)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1864, in decode_example
return {
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1865, in <dictcomp>
column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1308, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/image.py", line 175, in decode_example
image = PIL.Image.open(bytes_)
File "/src/services/worker/.venv/lib/python3.9/site-packages/PIL/Image.py", line 3268, in open
im = _open_core(fp, filename, prefix, formats)
File "/src/services/worker/.venv/lib/python3.9/site-packages/PIL/Image.py", line 3255, in _open_core
_decompression_bomb_check(im.size)
File "/src/services/worker/.venv/lib/python3.9/site-packages/PIL/Image.py", line 3164, in _decompression_bomb_check
raise DecompressionBombError(msg)
PIL.Image.DecompressionBombError: Image size (806504000 pixels) exceeds limit of 178956970 pixels, could be decompression bomb DOS attack.
```
One image is too big to be processed (or maybe some images), but the other images are smaller. We should return the response to `/first-rows`, but have a way to indicate that some of the cells have an error (as we already have `truncated_cells`) | Don't return an error on /first-rows (or later: /rows) if one image is failing: See https://huggingface.co/datasets/datadrivenscience/ship-detection
<img width="1034" alt="Capture dβeΜcran 2023-05-17 aΜ 14 33 42" src="https://github.com/huggingface/datasets-server/assets/1676121/9e12612f-bb42-4460-8c7d-91fc75534518">
```
Error code: StreamingRowsError
Exception: DecompressionBombError
Message: Image size (806504000 pixels) exceeds limit of 178956970 pixels, could be decompression bomb DOS attack.
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/utils.py", line 327, in get_rows_or_raise
return get_rows(
File "/src/services/worker/src/worker/utils.py", line 271, in decorator
return func(*args, **kwargs)
File "/src/services/worker/src/worker/utils.py", line 307, in get_rows
rows_plus_one = list(itertools.islice(ds, rows_max_number + 1))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 941, in __iter__
yield _apply_feature_types_on_example(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 700, in _apply_feature_types_on_example
decoded_example = features.decode_example(encoded_example, token_per_repo_id=token_per_repo_id)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1864, in decode_example
return {
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1865, in <dictcomp>
column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1308, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/image.py", line 175, in decode_example
image = PIL.Image.open(bytes_)
File "/src/services/worker/.venv/lib/python3.9/site-packages/PIL/Image.py", line 3268, in open
im = _open_core(fp, filename, prefix, formats)
File "/src/services/worker/.venv/lib/python3.9/site-packages/PIL/Image.py", line 3255, in _open_core
_decompression_bomb_check(im.size)
File "/src/services/worker/.venv/lib/python3.9/site-packages/PIL/Image.py", line 3164, in _decompression_bomb_check
raise DecompressionBombError(msg)
PIL.Image.DecompressionBombError: Image size (806504000 pixels) exceeds limit of 178956970 pixels, could be decompression bomb DOS attack.
```
One image is too big to be processed (or maybe some images), but the other images are smaller. We should return the response to `/first-rows`, but have a way to indicate that some of the cells have an error (as we already have `truncated_cells`) | closed | 2023-05-17T12:35:45Z | 2023-06-14T09:44:17Z | 2023-06-14T09:44:16Z | severo |
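A minimal sketch of the per-cell guard suggested in the issue above, assuming Pillow is the decoder; `decode_cell_safely` and the way it would plug into the row-fetching code are hypothetical, not the actual worker implementation:
```python
from typing import Any, Callable, Optional

import PIL.Image


def decode_cell_safely(decode: Callable[[Any], Any], value: Any) -> Optional[Any]:
    """Decode one cell; return None if Pillow refuses the image.

    The caller could then list the cell in a field similar to `truncated_cells`
    (e.g. a hypothetical `errored_cells`) instead of failing the whole response.
    """
    try:
        return decode(value)
    except PIL.Image.DecompressionBombError:
        # raised when width * height exceeds twice PIL.Image.MAX_IMAGE_PIXELS
        return None
```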
1,713,772,619 | Update starlette to 0.27.0 | Fix pip-audit for admin and api
```
Found 1 known vulnerability in 1 package
Name Version ID Fix Versions
--------- ------- ------------------- ------------
starlette 0.25.0 GHSA-v5gw-mw7f-84px 0.27.0
``` | Update starlette to 0.27.0: Fix pip-audit for admin and api
```
Found 1 known vulnerability in 1 package
Name Version ID Fix Versions
--------- ------- ------------------- ------------
starlette 0.25.0 GHSA-v5gw-mw7f-84px 0.27.0
``` | closed | 2023-05-17T12:13:48Z | 2023-05-17T12:40:37Z | 2023-05-17T12:37:43Z | lhoestq |
1,712,421,891 | Cache parquet metadata to optimize /rows | Step 1 of https://github.com/huggingface/datasets-server/issues/1186
I added a new job that gets the parquet metadata of each parquet file and writes them to disk in the `assets_directory`.
These metadata will be used to optimize random access to rows, which I will implement in a subsequent PR.
The parquet metadata files are placed in `assets/<dataset>/-pq-meta/<config>/` and have the same filename as the parquet files in `refs/convert/parquet`. I chose `-pq-meta` as a dataset separator because it needs to start with a dash to differentiate it from dataset names.
Usually these metadata files are supposed to be grouped into one `_metadata` sidecar file for all the parquet files, but I figured it was easier to have one per parquet file and it requires to load less data when doing a random access.
In the mongodb cache I store the lists of parquet metadata files and their `num_rows`, so that we can know in advance which parquet and metadata file to use. | Cache parquet metadata to optimize /rows: Step 1 of https://github.com/huggingface/datasets-server/issues/1186
I added a new job that gets the parquet metadata of each parquet file and writes them to disk in the `assets_directory`.
These metadata will be used to optimize random access to rows, which I will implement in a subsequent PR.
The parquet metadata files are placed in `assets/<dataset>/-pq-meta/<config>/` and have the same filename as the parquet files in `refs/convert/parquet`. I chose `-pq-meta` as a dataset separator because it needs to start with a dash to differentiate it from dataset names.
Usually these metadata files are supposed to be grouped into one `_metadata` sidecar file for all the parquet files, but I figured it was easier to have one per parquet file and it requires to load less data when doing a random access.
In the mongodb cache I store the lists of parquet metadata files and their `num_rows`, so that we can know in advance which parquet and metadata file to use. | closed | 2023-05-16T17:17:52Z | 2023-05-19T15:52:21Z | 2023-05-19T14:00:21Z | lhoestq |
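A rough sketch of what these per-shard metadata files make cheap, assuming pyarrow; the function name and the returned shape are illustrative, not the actual job runner code:
```python
import pyarrow.parquet as pq


def shard_index(parquet_path: str) -> dict:
    """Read only the parquet footer: enough to know how rows map to row groups."""
    metadata = pq.read_metadata(parquet_path)  # no data pages are read
    return {
        "num_rows": metadata.num_rows,
        "num_row_groups": metadata.num_row_groups,
        "rows_per_group": [
            metadata.row_group(i).num_rows for i in range(metadata.num_row_groups)
        ],
    }
```
With this stored on disk, /rows can compute which shard and row group contain a requested offset without touching the parquet data itself.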
1,712,001,601 | feat: return X-Revision header when possible on endpoints | it will help show the status of the cache entry on the Hub. | feat: return X-Revision header when possible on endpoints: it will help show the status of the cache entry on the Hub. | closed | 2023-05-16T13:10:27Z | 2023-05-17T16:00:42Z | 2023-05-17T15:57:56Z | severo
1,711,924,002 | feat: ignore result of job runner if job has been canceled | also: refactor to remove two queue methods (kill_zombies, kill_long_jobs): job_manager is now in charge of finishing the jobs, and updating the cache (if needed). | feat: ignore result of job runner if job has been canceled: also: refactor to remove two queue methods (kill_zombies, kill_long_jobs): job_manager is now in charge of finishing the jobs, and updating the cache (if needed). | closed | 2023-05-16T12:27:16Z | 2023-05-22T21:11:39Z | 2023-05-17T15:27:02Z | severo
1,710,554,790 | Set git revision at job creation | The proposal in the PR is to add a field `revision` to the jobs, at creation, and it must be non-null (it should be the commit hash).
This way, the job runners don't have to reach the Hub to check for the current revision, and we're preparing to (one day) handle multiple revisions in the cache for the same dataset.
Meanwhile, the model is the following:
- we cannot have two jobs for the same dataset but with different revisions: if we create a new job with a new revision, the existing ones are canceled.
- the cache contains the revision in the field "dataset_git_revision", and it's still optional for backward compatibility reasons
- there is no semantics about the time or ordering relation between commits: if we create a job for a given revision, it will be used blindly and replace the cache entries even if this is an older commit. | Set git revision at job creation: The proposal in the PR is to add a field `revision` to the jobs, at creation, and it must be non-null (it should be the commit hash).
This way, the job runners don't have to reach the Hub to check for the current revision, and we're preparing to (one day) handle multiple revisions in the cache for the same dataset.
Meanwhile, the model is the following:
- we cannot have two jobs for the same dataset but with different revisions: if we create a new job with a new revision, the existing ones are canceled.
- the cache contains the revision in the field "dataset_git_revision", and it's still optional for backward compatibility reasons
- there is no semantics about the time or ordering relation between commits: if we create a job for a given revision, it will be used blindly and replace the cache entries even if this is an older commit. | closed | 2023-05-15T17:57:06Z | 2023-05-17T14:53:58Z | 2023-05-17T14:51:07Z | severo |
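A hedged sketch of the proposed model; field and collection names are illustrative, not the actual libcommon schema:
```python
from mongoengine import DateTimeField, Document, StringField


class Job(Document):
    type = StringField(required=True)      # e.g. "/config-names"
    dataset = StringField(required=True)
    revision = StringField(required=True)  # commit hash, set when the job is created
    created_at = DateTimeField(required=True)

    meta = {
        "collection": "jobs",  # illustrative name
        "indexes": [("type", "dataset", "revision")],
    }
```
Making `revision` required (and part of the index) is what allows canceling existing jobs for the same dataset when a job with a new revision is created, as described above.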
1,710,306,468 | Re-enable image and audio in the viewer | The current caching mechanism from #1026 is almost never used:
a. the parquet index is stored in memory per worker and there are too many of them
b. the image/audio files are always recreated
Because of that the viewer was too slow for image and audio datasets and we disabled it in #1144
To fix this issue we could
- [x] 1. store the parquet index on disk using a worker
- [x] 2. use the local parquet index in /rows
- [ ] 3. (optional) avoid reloading images/audio files that are already on disk | Re-enable image and audio in the viewer: The current caching mechanism from #1026 is almost never used:
a. the parquet index is stored in memory per worker and there are too many of them
b. the image/audio files are always recreated
Because of that the viewer was too slow for image and audio datasets and we disabled it in #1144
To fix this issue we could
- [x] 1. store the parquet index on disk using a worker
- [x] 2. use the local parquet index in /rows
- [ ] 3. (optional) avoid reloading images/audio files that are already on disk | closed | 2023-05-15T15:11:50Z | 2023-05-25T15:33:34Z | 2023-05-25T15:33:34Z | lhoestq |
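For step 2, a hedged sketch of how a local parquet index enables random access, assuming pyarrow; the helper is hypothetical:
```python
import pyarrow as pa
import pyarrow.parquet as pq


def read_row_group_for_offset(parquet_path: str, offset: int) -> pa.Table:
    """Read only the row group that contains `offset`, not the whole file."""
    parquet_file = pq.ParquetFile(parquet_path)
    start = 0
    for i in range(parquet_file.metadata.num_row_groups):
        num_rows = parquet_file.metadata.row_group(i).num_rows
        if start <= offset < start + num_rows:
            return parquet_file.read_row_group(i)
        start += num_rows
    raise IndexError(f"offset {offset} is out of range ({start} rows)")
```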
1,709,919,677 | fix: don't fill truncated_cells w/ unsupported cols on /rows | See
https://github.com/huggingface/moon-landing/pull/6300#issuecomment-1547590109 for reference (internal link). | fix: don't fill truncated_cells w/ unsupported cols on /rows: See
https://github.com/huggingface/moon-landing/pull/6300#issuecomment-1547590109 for reference (internal link). | closed | 2023-05-15T11:43:24Z | 2023-05-15T12:39:27Z | 2023-05-15T12:36:13Z | severo |
1,708,787,938 | Dataset Viewer issue for annabely/ukiyoe_10_30_control_net | ### Link
https://huggingface.co/datasets/annabely/ukiyoe_10_30_control_net
### Description
The dataset viewer is not working for dataset annabely/ukiyoe_10_30_control_net.
Error details:
```
Error code: UnexpectedError
```
| Dataset Viewer issue for annabely/ukiyoe_10_30_control_net: ### Link
https://huggingface.co/datasets/annabely/ukiyoe_10_30_control_net
### Description
The dataset viewer is not working for dataset annabely/ukiyoe_10_30_control_net.
Error details:
```
Error code: UnexpectedError
```
| closed | 2023-05-14T01:43:25Z | 2023-05-15T08:19:10Z | 2023-05-15T08:19:10Z | annabelyim |
1,708,565,593 | fix: hot fix - catch exception on git revision | try to fix #1182 | fix: hot fix - catch exception on git revision: try to fix #1182 | closed | 2023-05-13T11:25:28Z | 2023-05-15T06:43:27Z | 2023-05-13T11:27:57Z | severo
1,708,557,188 | The workers fail with `mongoengine.errors.FieldDoesNotExist: The fields "{'force'}" do not exist on the document "Job"` | ```
INFO: 2023-05-13 10:50:18,007 - root - Worker loop started
INFO: 2023-05-13 10:50:18,023 - root - Starting heartbeat.
ERROR: 2023-05-13 10:50:18,115 - asyncio - Task exception was never retrieved
future: <Task finished name='Task-2' coro=<every() done, defined at /src/services/worker/src/worker/executor.py:26> exception=FieldDoesNotExist('The fields "{\'force\'}" do not exist on the document "Job"')>
Traceback (most recent call last):
File "/src/services/worker/src/worker/executor.py", line 30, in every
out = func(*args, **kwargs)
File "/src/services/worker/src/worker/executor.py", line 117, in kill_zombies
zombies = queue.get_zombies(max_seconds_without_heartbeat=self.max_seconds_without_heartbeat_for_zombies)
File "/src/libs/libcommon/src/libcommon/queue.py", line 622, in get_zombies
zombies = [
File "/src/libs/libcommon/src/libcommon/queue.py", line 622, in <listcomp>
zombies = [
File "/src/services/worker/.venv/lib/python3.9/site-packages/mongoengine/queryset/queryset.py", line 110, in _iter_results
self._populate_cache()
File "/src/services/worker/.venv/lib/python3.9/site-packages/mongoengine/queryset/queryset.py", line 129, in _populate_cache
self._result_cache.append(next(self))
File "/src/services/worker/.venv/lib/python3.9/site-packages/mongoengine/queryset/base.py", line 1599, in __next__
doc = self._document._from_son(
File "/src/services/worker/.venv/lib/python3.9/site-packages/mongoengine/base/document.py", line 836, in _from_son
obj = cls(__auto_convert=False, _created=created, **data)
File "/src/services/worker/.venv/lib/python3.9/site-packages/mongoengine/base/document.py", line 99, in __init__
raise FieldDoesNotExist(msg)
mongoengine.errors.FieldDoesNotExist: The fields "{'force'}" do not exist on the document "Job"
Traceback (most recent call last):
File "/src/services/worker/src/worker/main.py", line 61, in <module>
worker_executor.start()
File "/src/services/worker/src/worker/executor.py", line 89, in start
loop.run_until_complete(
File "/usr/local/lib/python3.9/asyncio/base_events.py", line 645, in run_until_complete
raise RuntimeError('Event loop stopped before Future completed.')
RuntimeError: Event loop stopped before Future completed.
```
<img width="622" alt="Capture dβeΜcran 2023-05-13 aΜ 12 52 06" src="https://github.com/huggingface/datasets-server/assets/1676121/3e4d7e1a-dd02-476d-b671-57ce03d28d7b">
| The workers fail with `mongoengine.errors.FieldDoesNotExist: The fields "{'force'}" do not exist on the document "Job"`: ```
INFO: 2023-05-13 10:50:18,007 - root - Worker loop started
INFO: 2023-05-13 10:50:18,023 - root - Starting heartbeat.
ERROR: 2023-05-13 10:50:18,115 - asyncio - Task exception was never retrieved
future: <Task finished name='Task-2' coro=<every() done, defined at /src/services/worker/src/worker/executor.py:26> exception=FieldDoesNotExist('The fields "{\'force\'}" do not exist on the document "Job"')>
Traceback (most recent call last):
File "/src/services/worker/src/worker/executor.py", line 30, in every
out = func(*args, **kwargs)
File "/src/services/worker/src/worker/executor.py", line 117, in kill_zombies
zombies = queue.get_zombies(max_seconds_without_heartbeat=self.max_seconds_without_heartbeat_for_zombies)
File "/src/libs/libcommon/src/libcommon/queue.py", line 622, in get_zombies
zombies = [
File "/src/libs/libcommon/src/libcommon/queue.py", line 622, in <listcomp>
zombies = [
File "/src/services/worker/.venv/lib/python3.9/site-packages/mongoengine/queryset/queryset.py", line 110, in _iter_results
self._populate_cache()
File "/src/services/worker/.venv/lib/python3.9/site-packages/mongoengine/queryset/queryset.py", line 129, in _populate_cache
self._result_cache.append(next(self))
File "/src/services/worker/.venv/lib/python3.9/site-packages/mongoengine/queryset/base.py", line 1599, in __next__
doc = self._document._from_son(
File "/src/services/worker/.venv/lib/python3.9/site-packages/mongoengine/base/document.py", line 836, in _from_son
obj = cls(__auto_convert=False, _created=created, **data)
File "/src/services/worker/.venv/lib/python3.9/site-packages/mongoengine/base/document.py", line 99, in __init__
raise FieldDoesNotExist(msg)
mongoengine.errors.FieldDoesNotExist: The fields "{'force'}" do not exist on the document "Job"
Traceback (most recent call last):
File "/src/services/worker/src/worker/main.py", line 61, in <module>
worker_executor.start()
File "/src/services/worker/src/worker/executor.py", line 89, in start
loop.run_until_complete(
File "/usr/local/lib/python3.9/asyncio/base_events.py", line 645, in run_until_complete
raise RuntimeError('Event loop stopped before Future completed.')
RuntimeError: Event loop stopped before Future completed.
```
<img width="622" alt="Capture dβeΜcran 2023-05-13 aΜ 12 52 06" src="https://github.com/huggingface/datasets-server/assets/1676121/3e4d7e1a-dd02-476d-b671-57ce03d28d7b">
| closed | 2023-05-13T10:52:20Z | 2023-06-12T15:05:35Z | 2023-06-12T15:05:34Z | severo |
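The error means old documents in the queue collection still carry the removed `force` field, and mongoengine refuses to load them. Two possible fixes, sketched only (not necessarily the fix that was shipped):
```python
from mongoengine import Document, StringField


class Job(Document):
    type = StringField(required=True)
    # ... other current fields ...

    # option a) tolerate fields left over from older schema versions
    meta = {"strict": False}


# option b) run a one-off migration that drops the stale field
# (uses the underlying pymongo collection)
def drop_force_field() -> None:
    Job._get_collection().update_many({}, {"$unset": {"force": ""}})
```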
1,708,553,564 | Dataset Viewer issue for kingjambal/jambal_common_voice | ### Link
https://huggingface.co/datasets/kingjambal/jambal_common_voice
### Description
The dataset viewer is not working for dataset kingjambal/jambal_common_voice.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for kingjambal/jambal_common_voice: ### Link
https://huggingface.co/datasets/kingjambal/jambal_common_voice
### Description
The dataset viewer is not working for dataset kingjambal/jambal_common_voice.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-05-13T10:37:57Z | 2023-05-15T09:11:55Z | 2023-05-15T09:11:55Z | kingjambal |
1,708,552,218 | Dataset Viewer issue for 0x22almostEvil/reasoning_bg_oa | ### Link
https://huggingface.co/datasets/0x22almostEvil/reasoning_bg_oa
### Description
The dataset viewer is not working for dataset 0x22almostEvil/reasoning_bg_oa.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for 0x22almostEvil/reasoning_bg_oa: ### Link
https://huggingface.co/datasets/0x22almostEvil/reasoning_bg_oa
### Description
The dataset viewer is not working for dataset 0x22almostEvil/reasoning_bg_oa.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-05-13T10:33:27Z | 2023-05-13T14:55:08Z | 2023-05-13T14:55:08Z | echo0x22 |
1,708,545,981 | Dataset Viewer issue for Abrumu/Fashion_controlnet_dataset_V2 | ### Link
https://huggingface.co/datasets/Abrumu/Fashion_controlnet_dataset_V2
### Description
The dataset viewer is not working for dataset Abrumu/Fashion_controlnet_dataset_V2.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for Abrumu/Fashion_controlnet_dataset_V2: ### Link
https://huggingface.co/datasets/Abrumu/Fashion_controlnet_dataset_V2
### Description
The dataset viewer is not working for dataset Abrumu/Fashion_controlnet_dataset_V2.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-05-13T10:09:09Z | 2023-05-15T09:22:31Z | 2023-05-15T09:22:31Z | abdelrahmanabdelghany |
1,708,417,580 | Dataset Viewer issue for lliillyy/controlnet_ap10k_val | ### Link
https://huggingface.co/datasets/lliillyy/controlnet_ap10k_val
### Description
The dataset viewer is not working for dataset lliillyy/controlnet_ap10k_val.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for lliillyy/controlnet_ap10k_val: ### Link
https://huggingface.co/datasets/lliillyy/controlnet_ap10k_val
### Description
The dataset viewer is not working for dataset lliillyy/controlnet_ap10k_val.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-05-13T03:50:41Z | 2023-05-15T09:27:04Z | 2023-05-15T09:27:03Z | Lliillyy |
1,708,342,990 | Adding full_scan field in opt-in-out cache | According to PR https://github.com/huggingface/moon-landing/pull/6289/files, we will need a full_scan flag for the UI
| Adding full_scan field in opt-in-out cache: According to PR https://github.com/huggingface/moon-landing/pull/6289/files, we will need a full_scan flag for the UI
| closed | 2023-05-12T23:43:13Z | 2023-05-15T12:45:48Z | 2023-05-15T12:42:39Z | AndreaFrancis |
1,708,132,717 | Dataset Viewer issue for claritylab/UTCD | ### Link
https://huggingface.co/datasets/claritylab/UTCD
### Description
The dataset viewer is not working for dataset claritylab/UTCD.
Error details:
```
Error code: TooManyColumnsError
```
I'm having trouble getting the dataset viewer to work.
I did a bit of research:
- https://discuss.huggingface.co/t/the-dataset-preview-has-been-disabled-on-this-dataset/21339/3
- https://discuss.huggingface.co/t/dataset-preview-not-showing-for-uploaded-datasetdict/12603/4
and it looks like the only way to get the dataset viewer to work is to call `push_to_hub` s.t. there are `.parquet` versions of my dataset in the `ref/convert/parquet` branch.
But I already had my dataset loading script with `json` in-place. I think I can both 1) keep the existing dataset loading script, and 2) add the `parquet` version of my datasets to the branch, as I think [`squad_v2`](https://huggingface.co/datasets/squad_v2/tree/main) is an example for that.
So I created a branch and pushed my dataset again via python:
```python
from huggingface_hub import create_branch
create_branch('claritylab/utcd', repo_type='dataset', branch='ref/convert/parquet')
dataset = load_dataset('claritylab/utcd', name='in-domain')
dataset.push_to_hub('claritylab/utcd', branch='ref/convert/parquet')
```
the code executed fine with no error and the output message below:
```
Downloading readme: 100%|ββββββββββ| 8.40k/8.40k [00:00<00:00, 1.41MB/s]
Found cached dataset utcd (/Users/stefanhg/.cache/huggingface/datasets/claritylab___utcd/aspect-normalized-in-domain/0.0.1/fe244a6f1dd95dfe9df993724e1b1ddb699c1900c2edb11a3380c7a2f6b78beb)
100%|ββββββββββ| 3/3 [00:00<00:00, 191.17it/s]
Pushing split train to the Hub.
Pushing dataset shards to the dataset hub: 0%| | 0/1 [00:00<?, ?it/s]
Creating parquet from Arrow format: 0%| | 0/116 [00:00<?, ?ba/s]
Creating parquet from Arrow format: 35%|ββββ | 41/116 [00:00<00:00, 376.94ba/s]
Creating parquet from Arrow format: 100%|ββββββββββ| 116/116 [00:00<00:00, 316.08ba/s]
Upload 1 LFS files: 0%| | 0/1 [00:00<?, ?it/s]
Upload 1 LFS files: 100%|ββββββββββ| 1/1 [00:11<00:00, 11.18s/it]
Pushing dataset shards to the dataset hub: 100%|ββββββββββ| 1/1 [00:12<00:00, 12.06s/it]
Pushing split validation to the Hub.
Pushing dataset shards to the dataset hub: 0%| | 0/1 [00:00<?, ?it/s]
Creating parquet from Arrow format: 100%|ββββββββββ| 13/13 [00:00<00:00, 469.34ba/s]
Upload 1 LFS files: 0%| | 0/1 [00:00<?, ?it/s]
Upload 1 LFS files: 100%|ββββββββββ| 1/1 [00:01<00:00, 1.42s/it]
Pushing dataset shards to the dataset hub: 100%|ββββββββββ| 1/1 [00:01<00:00, 1.85s/it]
Pushing split test to the Hub.
Pushing dataset shards to the dataset hub: 0%| | 0/1 [00:00<?, ?it/s]
Creating parquet from Arrow format: 0%| | 0/169 [00:00<?, ?ba/s]
Creating parquet from Arrow format: 27%|βββ | 45/169 [00:00<00:00, 449.16ba/s]
Creating parquet from Arrow format: 53%|ββββββ | 90/169 [00:00<00:00, 429.39ba/s]
Creating parquet from Arrow format: 100%|ββββββββββ| 169/169 [00:00<00:00, 530.78ba/s]
Upload 1 LFS files: 0%| | 0/1 [00:00<?, ?it/s]
Upload 1 LFS files: 100%|ββββββββββ| 1/1 [00:13<00:00, 13.75s/it]
Pushing dataset shards to the dataset hub: 100%|ββββββββββ| 1/1 [00:14<00:00, 14.59s/it]
```
but I don't see any difference on my dataset repository website: no branch named `ref/convert/parquet` is available, and thus nothing in the branch.
Please help. Thank you!
| Dataset Viewer issue for claritylab/UTCD: ### Link
https://huggingface.co/datasets/claritylab/UTCD
### Description
The dataset viewer is not working for dataset claritylab/UTCD.
Error details:
```
Error code: TooManyColumnsError
```
I'm having trouble getting the dataset viewer to work.
I did a bit of research:
- https://discuss.huggingface.co/t/the-dataset-preview-has-been-disabled-on-this-dataset/21339/3
- https://discuss.huggingface.co/t/dataset-preview-not-showing-for-uploaded-datasetdict/12603/4
and it looks like the only way to get the dataset viewer to work is to call `push_to_hub` s.t. there are `.parquet` versions of my dataset in the `ref/convert/parquet` branch.
But I already had my dataset loading script with `json` in-place. I think I can both 1) keep the existing dataset loading script, and 2) add the `parquet` version of my datasets to the branch, as I think [`squad_v2`](https://huggingface.co/datasets/squad_v2/tree/main) is an example for that.
So I created a branch and pushed my dataset again via python:
```python
from huggingface_hub import create_branch
create_branch('claritylab/utcd', repo_type='dataset', branch='ref/convert/parquet')
dataset = load_dataset('claritylab/utcd', name='in-domain')
dataset.push_to_hub('claritylab/utcd', branch='ref/convert/parquet')
```
the code executed fine with no error and the output message below:
```
Downloading readme: 100%|ββββββββββ| 8.40k/8.40k [00:00<00:00, 1.41MB/s]
Found cached dataset utcd (/Users/stefanhg/.cache/huggingface/datasets/claritylab___utcd/aspect-normalized-in-domain/0.0.1/fe244a6f1dd95dfe9df993724e1b1ddb699c1900c2edb11a3380c7a2f6b78beb)
100%|ββββββββββ| 3/3 [00:00<00:00, 191.17it/s]
Pushing split train to the Hub.
Pushing dataset shards to the dataset hub: 0%| | 0/1 [00:00<?, ?it/s]
Creating parquet from Arrow format: 0%| | 0/116 [00:00<?, ?ba/s]
Creating parquet from Arrow format: 35%|ββββ | 41/116 [00:00<00:00, 376.94ba/s]
Creating parquet from Arrow format: 100%|ββββββββββ| 116/116 [00:00<00:00, 316.08ba/s]
Upload 1 LFS files: 0%| | 0/1 [00:00<?, ?it/s]
Upload 1 LFS files: 100%|ββββββββββ| 1/1 [00:11<00:00, 11.18s/it]
Pushing dataset shards to the dataset hub: 100%|ββββββββββ| 1/1 [00:12<00:00, 12.06s/it]
Pushing split validation to the Hub.
Pushing dataset shards to the dataset hub: 0%| | 0/1 [00:00<?, ?it/s]
Creating parquet from Arrow format: 100%|ββββββββββ| 13/13 [00:00<00:00, 469.34ba/s]
Upload 1 LFS files: 0%| | 0/1 [00:00<?, ?it/s]
Upload 1 LFS files: 100%|ββββββββββ| 1/1 [00:01<00:00, 1.42s/it]
Pushing dataset shards to the dataset hub: 100%|ββββββββββ| 1/1 [00:01<00:00, 1.85s/it]
Pushing split test to the Hub.
Pushing dataset shards to the dataset hub: 0%| | 0/1 [00:00<?, ?it/s]
Creating parquet from Arrow format: 0%| | 0/169 [00:00<?, ?ba/s]
Creating parquet from Arrow format: 27%|βββ | 45/169 [00:00<00:00, 449.16ba/s]
Creating parquet from Arrow format: 53%|ββββββ | 90/169 [00:00<00:00, 429.39ba/s]
Creating parquet from Arrow format: 100%|ββββββββββ| 169/169 [00:00<00:00, 530.78ba/s]
Upload 1 LFS files: 0%| | 0/1 [00:00<?, ?it/s]
Upload 1 LFS files: 100%|ββββββββββ| 1/1 [00:13<00:00, 13.75s/it]
Pushing dataset shards to the dataset hub: 100%|ββββββββββ| 1/1 [00:14<00:00, 14.59s/it]
```
but I don't see any difference on my dataset repository website: no branch named `ref/convert/parquet` is available, and thus nothing in the branch.
Please help. Thank you!
| closed | 2023-05-12T19:53:16Z | 2023-05-16T08:53:22Z | 2023-05-16T08:53:22Z | StefanHeng |
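Two hedged observations on the report above. First, the branch created in the snippet is `ref/convert/parquet`, while the ref the server actually maintains (see the parquet-metadata PR earlier in this list) is `refs/convert/parquet`, and it is populated automatically rather than by user pushes. Second, the error code is `TooManyColumnsError`, so the column count is the likely blocker; a quick local check, assuming the `datasets` library:
```python
from datasets import load_dataset_builder

# Inspect the declared features without downloading the data files.
builder = load_dataset_builder("claritylab/UTCD", "in-domain")
features = builder.info.features  # as declared by the loading script
print(len(features) if features is not None else "features not declared")
# the viewer skips splits whose column count exceeds its limit (1000, per a later issue)
```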
1,707,648,352 | Dataset Viewer issue for kingjambal/jambal_common_voice | ### Link
https://huggingface.co/datasets/kingjambal/jambal_common_voice
### Description
The dataset viewer is not working for dataset kingjambal/jambal_common_voice.
Error details:
```
Error code: JobManagerCrashedError
Traceback: The previous step failed, the error is copied to this step: kind='/config-names' dataset=''kingjambal/jambal_common_voice'' config=None split=None---
```
| Dataset Viewer issue for kingjambal/jambal_common_voice: ### Link
https://huggingface.co/datasets/kingjambal/jambal_common_voice
### Description
The dataset viewer is not working for dataset kingjambal/jambal_common_voice.
Error details:
```
Error code: JobManagerCrashedError
Traceback: The previous step failed, the error is copied to this step: kind='/config-names' dataset=''kingjambal/jambal_common_voice'' config=None split=None---
```
| closed | 2023-05-12T13:33:14Z | 2023-05-15T09:07:39Z | 2023-05-15T09:07:39Z | kingjambal |
1,707,621,414 | Add a field, and rename another one, in /opt-in-out-urls | The current response for /opt-in-out-urls is:
```
{
"urls_columns": ["url"],
"has_urls_columns": true,
"num_opt_in_urls": 0,
"num_opt_out_urls": 4052,
"num_scanned_rows": 12452281,
"num_urls": 12452281
}
```
I think we should:
- rename `num_urls` into `num_scanned_urls`
- add `num_rows` with the total number of rows in the dataset/config/split. It would help understand which proportion of the dataset has been scanned. Note that the information is already available in `/size`, but I think it would be handy to have this information here. wdyt? | Add a field, and rename another one, in /opt-in-out-urls: The current response for /opt-in-out-urls is:
```
{
"urls_columns": ["url"],
"has_urls_columns": true,
"num_opt_in_urls": 0,
"num_opt_out_urls": 4052,
"num_scanned_rows": 12452281,
"num_urls": 12452281
}
```
I think we should:
- rename `num_urls` into `num_scanned_urls`
- add `num_rows` with the total number of rows in the dataset/config/split. It would help understand which proportion of the dataset has been scanned. Note that the information is already available in `/size`, but I think it would be handy to have this information here. wdyt? | closed | 2023-05-12T13:15:40Z | 2023-05-12T13:54:14Z | 2023-05-12T13:23:57Z | severo |
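An illustrative Python literal of what the response could look like after the proposed rename and addition, reusing the numbers above (the exact field set is the point of the discussion):
```python
proposed_response = {
    "urls_columns": ["url"],
    "has_urls_columns": True,
    "num_opt_in_urls": 0,
    "num_opt_out_urls": 4052,
    "num_scanned_rows": 12452281,
    "num_scanned_urls": 12452281,  # renamed from `num_urls`
    "num_rows": 12452281,          # new: total rows in the split, as reported by /size
}
```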
1,707,525,466 | change color and size of nodes | sorry i forgot to push one commit to the previous plot PR, this is to make text on nodes visible. | change color and size of nodes: sorry i forgot to push one commit to the previous plot PR, this is to make text on nodes visible. | closed | 2023-05-12T12:11:04Z | 2023-05-12T12:14:18Z | 2023-05-12T12:11:32Z | polinaeterna
1,707,453,117 | Process part of the columns, instead of giving an error? | When the number of columns is above 1000, we don't process the split. See https://github.com/huggingface/datasets-server/issues/1143.
Should we instead "truncate", and only process the first 1000 columns, and give a hint to the user that only the first 1000 columns were used? | Process part of the columns, instead of giving an error?: When the number of columns is above 1000, we don't process the split. See https://github.com/huggingface/datasets-server/issues/1143.
Should we instead "truncate", and only process the first 1000 columns, and give a hint to the user that only the first 1000 columns were used? | open | 2023-05-12T11:26:37Z | 2024-06-19T14:11:48Z | null | severo |
1,707,180,264 | Dataset Viewer issue for cbt and "raw" configuration: Cannot GET | ### Link
https://huggingface.co/datasets/cbt
### Description
There is an issue with the URL to show a specific split for the "raw" configuration:
```
Cannot GET /datasets/cbt/viewer/raw/train
```
- See: https://huggingface.co/datasets/cbt/viewer/raw/train
However, it works when no split name is provided in the URL.
- See: https://huggingface.co/datasets/cbt/viewer/raw
Maybe the word "raw" has a special meaning in the URL and cannot be used as configuration name (as it is the case in GitHub)?
CC: @severo | Dataset Viewer issue for cbt and "raw" configuration: Cannot GET: ### Link
https://huggingface.co/datasets/cbt
### Description
There is an issue with the URL to show a specific split for the "raw" configuration:
```
Cannot GET /datasets/cbt/viewer/raw/train
```
- See: https://huggingface.co/datasets/cbt/viewer/raw/train
However, it works when no split name is provided in the URL.
- See: https://huggingface.co/datasets/cbt/viewer/raw
Maybe the word "raw" has a special meaning in the URL and cannot be used as configuration name (as it is the case in GitHub)?
CC: @severo | closed | 2023-05-12T08:18:41Z | 2023-06-19T15:11:49Z | 2023-06-19T15:04:23Z | albertvillanova |
1,706,479,162 | Removing non necessary attributes in job runner init | Small fix for https://github.com/huggingface/datasets-server/pull/1146#discussion_r1191621514
We don't need to initialize job_manager attributes on job_runner | Removing non necessary attributes in job runner init: Small fix for https://github.com/huggingface/datasets-server/pull/1146#discussion_r1191621514
We don't need to initialize job_manager attributes on job_runner | closed | 2023-05-11T20:09:19Z | 2023-05-12T14:58:13Z | 2023-05-12T14:55:23Z | AndreaFrancis |
1,705,780,784 | Refactor errors | <img width="154" alt="Capture d'écran 2023-05-11 à 14 59 20" src="https://github.com/huggingface/datasets-server/assets/1676121/d9282ccb-07a5-483c-8db4-a629c8b188bb">
^ yes!
### update
<img width="147" alt="Capture dβeΜcran 2023-05-15 aΜ 16 43 54" src="https://github.com/huggingface/datasets-server/assets/1676121/688eb8b5-393e-42de-9958-4eae5775d969">
| Refactor errors: <img width="154" alt="Capture d'écran 2023-05-11 à 14 59 20" src="https://github.com/huggingface/datasets-server/assets/1676121/d9282ccb-07a5-483c-8db4-a629c8b188bb">
^ yes!
### update
<img width="147" alt="Capture dβeΜcran 2023-05-15 aΜ 16 43 54" src="https://github.com/huggingface/datasets-server/assets/1676121/688eb8b5-393e-42de-9958-4eae5775d969">
| closed | 2023-05-11T12:58:44Z | 2023-05-17T13:28:26Z | 2023-05-17T13:25:41Z | severo |
1,705,677,480 | Rename `/split-names-from-streaming` job runner | Part of https://github.com/huggingface/datasets-server/issues/1086 and https://github.com/huggingface/datasets-server/issues/867 | Rename `/split-names-from-streaming` job runner: Part of https://github.com/huggingface/datasets-server/issues/1086 and https://github.com/huggingface/datasets-server/issues/867 | closed | 2023-05-11T11:55:43Z | 2023-05-19T16:23:07Z | 2023-05-19T12:35:31Z | polinaeterna |
1,705,431,098 | Remove should_skip_job | null | Remove should_skip_job: | closed | 2023-05-11T09:29:33Z | 2023-05-12T15:31:24Z | 2023-05-12T15:28:40Z | severo |