id (int64, 959M–2.55B) | title (string, 3–133 chars) | body (string, 1–65.5k chars) | description (string, 5–65.6k chars) | state (2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | closed_at (string, 20 chars) | user (174 classes) |
---|---|---|---|---|---|---|---|---|
1,915,079,633 | fix: 🐛 fix comment | the revision text (ie: 39871f7) automatically creates a link. No need to create one by hand. | fix: 🐛 fix comment: the revision text (ie: 39871f7) automatically creates a link. No need to create one by hand. | closed | 2023-09-27T09:25:36Z | 2023-09-27T11:56:57Z | 2023-09-27T11:56:56Z | severo |
1,915,033,843 | Update duckdb to 0.9.0 | https://duckdb.org/2023/09/26/announcing-duckdb-090.html | Update duckdb to 0.9.0: https://duckdb.org/2023/09/26/announcing-duckdb-090.html | closed | 2023-09-27T08:59:30Z | 2024-02-14T07:55:16Z | 2024-02-06T16:58:35Z | severo |
1,914,197,608 | Adding NoIndexableColumnsError in codes to retry for backfill | null | Adding NoIndexableColumnsError in codes to retry for backfill: | closed | 2023-09-26T19:51:49Z | 2023-09-26T19:53:01Z | 2023-09-26T19:53:00Z | AndreaFrancis |
1,914,191,347 | feat: 🎸 increase the number of workers | null | feat: 🎸 increase the number of workers: | closed | 2023-09-26T19:46:54Z | 2023-09-26T19:47:51Z | 2023-09-26T19:47:50Z | severo |
1,913,789,555 | Reduce rows lru cache in /rows | someone is querying an image dataset ioclab/animesfw a lot at different offsets which fills the lru cache of image data and leads to OOM | Reduce rows lru cache in /rows: someone is querying an image dataset ioclab/animesfw a lot at different offsets which fills the lru cache of image data and leads to OOM | closed | 2023-09-26T15:36:05Z | 2023-09-26T19:24:13Z | 2023-09-26T19:24:12Z | lhoestq |
1,913,740,191 | Increase worker ram limit | following https://github.com/huggingface/datasets-server/pull/1856
since indexing C4 still OOMs
cc @rtrompier @severo | Increase worker ram limit: following https://github.com/huggingface/datasets-server/pull/1856
since indexing C4 still OOMs
cc @rtrompier @severo | closed | 2023-09-26T15:11:54Z | 2023-09-26T15:24:37Z | 2023-09-26T15:24:36Z | lhoestq |
1,913,664,960 | Log OOMs | it checks the exit code of the process to know if it was OOM or a job that errored out and crashed | Log OOMs: it checks the exit code of the process to know if it was OOM or a job that errored out and crashed | closed | 2023-09-26T14:36:19Z | 2023-09-26T15:09:33Z | 2023-09-26T15:09:32Z | lhoestq |
1,913,605,909 | feat: 🎸 allow "post-messages" action | I removed the allowlist, because it requires to modify the code at two places, which is error prone (at least, I had the issue). | feat: 🎸 allow "post-messages" action: I removed the allowlist, because it requires to modify the code at two places, which is error prone (at least, I had the issue). | closed | 2023-09-26T14:10:41Z | 2023-09-26T16:43:18Z | 2023-09-26T16:43:17Z | severo |
1,913,584,767 | Search times out for long queries | searching for small queries like `nerijs` or `nerijs pixel` works, but not for `nerijs pixel art xl` in https://huggingface.co/datasets/multimodalart/lora-fusing-preferences
It seems to be due to time out because long queries take longer to run | Search times out for long queries: searching for small queries like `nerijs` or `nerijs pixel` works, but not for `nerijs pixel art xl` in https://huggingface.co/datasets/multimodalart/lora-fusing-preferences
It seems to be due to time out because long queries take longer to run | closed | 2023-09-26T14:00:26Z | 2023-12-19T11:20:35Z | 2023-12-19T11:20:35Z | lhoestq |
1,913,568,676 | Pagination broken for some image datasets (too big row groups) | some datasets have row groups too big instead of being limited to 100 rows
eg https://huggingface.co/datasets/multimodalart/lora-fusing-preferences has all its 5xx rows in one row group in the parquet export
This causes pagination to return an error to avoid OOM | Pagination broken for some image datasets (too big row groups): some datasets have row groups too big instead of being limited to 100 rows
eg https://huggingface.co/datasets/multimodalart/lora-fusing-preferences has all its 5xx rows in one row group in the parquet export
This causes pagination to return an error to avoid OOM | closed | 2023-09-26T13:53:24Z | 2024-08-22T00:38:57Z | 2024-08-22T00:38:57Z | lhoestq |
1,913,493,313 | Improve docs on polars | See the [Polars guide](https://huggingface.co/docs/datasets-server/polars) | Improve docs on polars: See the [Polars guide](https://huggingface.co/docs/datasets-server/polars) | closed | 2023-09-26T13:16:26Z | 2023-09-26T13:34:24Z | 2023-09-26T13:34:20Z | severo |
1,913,427,513 | Publish more statistics about Hub datasets usage | - dataset count (private/public, with script/without script)
- validity (/is-valid) of the first 20 trending datasets | Publish more statistics about Hub datasets usage: - dataset count (private/public, with script/without script)
- validity (/is-valid) of the first 20 trending datasets | closed | 2023-09-26T12:40:54Z | 2023-09-28T11:46:05Z | 2023-09-28T11:46:05Z | severo |
1,913,361,181 | fix(worker): update resources limitation | null | fix(worker): update resources limitation: | closed | 2023-09-26T12:02:31Z | 2023-09-26T12:25:00Z | 2023-09-26T12:24:59Z | rtrompier |
1,912,749,432 | Add query parameter to /search 422 error description in openapi.json | Add query parameter to /search 422 error description in `openapi.json`. | Add query parameter to /search 422 error description in openapi.json: Add query parameter to /search 422 error description in `openapi.json`. | closed | 2023-09-26T06:06:12Z | 2023-09-26T14:33:04Z | 2023-09-26T14:32:31Z | albertvillanova |
1,912,385,405 | remove indexable columns limitation in split-duckdb-index | Needed for https://github.com/huggingface/datasets-server/pull/1418
Option 1: Since we need all the split datasets with the duckdb file for /filter, I propose just removing the limitation in the job runner as a noninvasive change.
Pending:
- [X] update tests
- [X] migration to add default field as has_fts=True for the existing cache records with status 200
~~Option 2:~~
~~As an alternative, I think we could split the tasks into two separate runners:~~
~~- split-duckdb-data: in charge of data ingestion for the duckdb file only.~~
~~- split-duck-index: in charge of indexing the existing table only if indexable columns exist (might need to first download the previous one and generate a new one + the index).~~
~~But there could be open questions: Will we keep only one file in the repo? Should we have two separate files in the repo? (one for data and another for data+index)?~~ | remove indexable columns limitation in split-duckdb-index: Needed for https://github.com/huggingface/datasets-server/pull/1418
Option 1: Since we need all the split datasets with the duckdb file for /filter, I propose just removing the limitation in the job runner as a noninvasive change.
Pending:
- [X] update tests
- [X] migration to add default field as has_fts=True for the existing cache records with status 200
~~Option 2:~~
~~As an alternative, I think we could split the tasks into two separate runners:~~
~~- split-duckdb-data: in charge of data ingestion for the duckdb file only.~~
~~- split-duck-index: in charge of indexing the existing table only if indexable columns exist (might need to first download the previous one and generate a new one + the index).~~
~~But there could be open questions: Will we keep only one file in the repo? Should we have two separate files in the repo? (one for data and another for data+index)?~~ | closed | 2023-09-25T22:43:52Z | 2023-09-26T19:24:56Z | 2023-09-26T19:24:55Z | AndreaFrancis |
1,912,229,404 | update pillow version | From pillow 9.5.0 to 10.0.1 -> Warning PYSEC-2023-175 | update pillow version: From pillow 9.5.0 to 10.0.1 -> Warning PYSEC-2023-175 | closed | 2023-09-25T20:24:58Z | 2023-09-25T20:51:12Z | 2023-09-25T20:51:11Z | AndreaFrancis |
1,911,530,905 | Show the visual explanation of configs and splits earlier in the docs | Show this image before the section https://huggingface.co/docs/datasets-server/configs_and_splits

[internal link](https://huggingface.slack.com/archives/C04BP5S7858/p1685473537226089)
| Show the visual explanation of configs and splits earlier in the docs: Show this image before the section https://huggingface.co/docs/datasets-server/configs_and_splits

[internal link](https://huggingface.slack.com/archives/C04BP5S7858/p1685473537226089)
| closed | 2023-09-25T13:36:12Z | 2024-02-09T18:02:05Z | 2024-02-09T18:02:04Z | severo |
1,911,521,293 | Create google colab to test the docs guides | Proposed by @mishig25 ([internal](https://huggingface.slack.com/archives/C0311GZ7R6K/p1684484245379859))
> moreover, providing an end-to-end example in python that can be doc-and-google-colab (like [transformers quicktour](https://huggingface.co/docs/transformers/quicktour)) would be great so that users can start playing right away
by end-to-end example, I mean something like 1. get data from datasets-server, 2. (optional) some small analysis (numpy), 3. plot/visualize something (matplotlib)
| Create google colab to test the docs guides: Proposed by @mishig25 ([internal](https://huggingface.slack.com/archives/C0311GZ7R6K/p1684484245379859))
> moreover, providing an end-to-end example in python that can be doc-and-google-colab (like [transformers quicktour](https://huggingface.co/docs/transformers/quicktour)) would be great so that users can start playing right away
by end-to-end example, I mean something like 1. get data from datasets-server, 2. (optional) some small analysis (numpy), 3. plot/visualize something (matplotlib)
| open | 2023-09-25T13:31:19Z | 2023-09-25T13:31:29Z | null | severo |
1,911,516,724 | The "Javascript" and "Curl" links don't work in docs | The buttons "Javascript" and "Curl" don't work in the docs: https://huggingface.co/docs/datasets-server/quick_start#check-dataset-validity
<img width="375" alt="Capture d'écran 2023-09-25 à 15 28 22" src="https://github.com/huggingface/datasets-server/assets/1676121/cad5d213-0885-4ca5-a199-6a02f5b0cb02">
cc @mishig25 | The "Javascript" and "Curl" links don't work in docs: The buttons "Javascript" and "Curl" don't work in the docs: https://huggingface.co/docs/datasets-server/quick_start#check-dataset-validity
<img width="375" alt="Capture d'écran 2023-09-25 à 15 28 22" src="https://github.com/huggingface/datasets-server/assets/1676121/cad5d213-0885-4ca5-a199-6a02f5b0cb02">
cc @mishig25 | closed | 2023-09-25T13:29:00Z | 2023-09-26T14:29:50Z | 2023-09-26T14:29:50Z | severo |
1,911,231,608 | Update docs and openapi.json with supported audio | Update docs and openapi.json with supported audio.
Related to:
- https://github.com/huggingface/datasets-server/pull/1792
- https://github.com/huggingface/datasets-server/pull/1743 | Update docs and openapi.json with supported audio: Update docs and openapi.json with supported audio.
Related to:
- https://github.com/huggingface/datasets-server/pull/1792
- https://github.com/huggingface/datasets-server/pull/1743 | closed | 2023-09-25T11:02:10Z | 2023-09-25T13:11:10Z | 2023-09-25T13:10:34Z | albertvillanova |
1,911,129,761 | feat: 🎸 add call to action to get feedback on notifications | see
https://github.com/huggingface/datasets-server/pull/1824#issuecomment-1733328326
<img width="759" alt="Capture d'écran 2023-09-25 à 12 01 48" src="https://github.com/huggingface/datasets-server/assets/1676121/b5ca5d15-4f4e-454c-854a-c9681a5bb9cd">
| feat: 🎸 add call to action to get feedback on notifications: see
https://github.com/huggingface/datasets-server/pull/1824#issuecomment-1733328326
<img width="759" alt="Capture d'écran 2023-09-25 à 12 01 48" src="https://github.com/huggingface/datasets-server/assets/1676121/b5ca5d15-4f4e-454c-854a-c9681a5bb9cd">
| closed | 2023-09-25T10:01:59Z | 2023-09-25T10:53:31Z | 2023-09-25T10:53:30Z | severo |
1,910,989,841 | feat: 🎸 remove backfill after deployment | null | feat: 🎸 remove backfill after deployment: | closed | 2023-09-25T08:46:21Z | 2023-09-25T08:47:03Z | 2023-09-25T08:46:26Z | severo |
1,910,868,493 | feat: 🎸 increase version of split-first-rows-from-streaming | see https://github.com/huggingface/datasets-server/pull/1829. Also launch a backfill after the deployment | feat: 🎸 increase version of split-first-rows-from-streaming: see https://github.com/huggingface/datasets-server/pull/1829. Also launch a backfill after the deployment | closed | 2023-09-25T07:35:57Z | 2023-09-25T07:36:44Z | 2023-09-25T07:36:15Z | severo |
1,909,445,920 | Descriptive stats: unsupported operand type(s) for -: 'NoneType' and 'NoneType | There are 1516 cache records with the following error:
```
unsupported operand type(s) for -: 'NoneType' and 'NoneType'
unsupported operand type(s) for *: 'NoneType' and 'float'
unsupported operand type(s) for *: 'decimal.Decimal' and 'float'"
```
See example https://datasets-server.huggingface.co/statistics?dataset=mozilla-foundation/common_voice_9_0&config=ha&split=validation
See for more details in db:
`{kind:"split-descriptive-statistics", http_status:500, "content.error":/unsupported operand/}`
Related to https://github.com/huggingface/datasets-server/issues/1443 | Descriptive stats: unsupported operand type(s) for -: 'NoneType' and 'NoneType: There are 1516 cache records with the following error:
```
unsupported operand type(s) for -: 'NoneType' and 'NoneType'
unsupported operand type(s) for *: 'NoneType' and 'float'
unsupported operand type(s) for *: 'decimal.Decimal' and 'float'"
```
See example https://datasets-server.huggingface.co/statistics?dataset=mozilla-foundation/common_voice_9_0&config=ha&split=validation
See for more details in db:
`{kind:"split-descriptive-statistics", http_status:500, "content.error":/unsupported operand/}`
Related to https://github.com/huggingface/datasets-server/issues/1443 | closed | 2023-09-22T19:52:32Z | 2024-04-29T18:13:32Z | 2024-04-29T18:13:32Z | AndreaFrancis |
1,909,426,894 | Descriptive stats: duckdb.Error: float division by zero | There are 1248 cache records with the following error:

See example https://datasets-server.huggingface.co/statistics?dataset=mstz%2Fsonar&config=sonar&split=train&offset=0&limit=100
See for more details:
`{kind:"split-descriptive-statistics", http_status:500, error_code:"UnexpectedError","content.error":"float division by zero", "details.copied_from_artifact":{$exists:false}}`
Looks like the issue is here
https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/split/descriptive_statistics.py#L113
and here https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/split/descriptive_statistics.py#L120 when bin_size is 0.
Related to https://github.com/huggingface/datasets-server/issues/1443 | Descriptive stats: duckdb.Error: float division by zero: There are 1248 cache records with the following error:

See example https://datasets-server.huggingface.co/statistics?dataset=mstz%2Fsonar&config=sonar&split=train&offset=0&limit=100
See for more details:
`{kind:"split-descriptive-statistics", http_status:500, error_code:"UnexpectedError","content.error":"float division by zero", "details.copied_from_artifact":{$exists:false}}`
Looks like the issue is here
https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/split/descriptive_statistics.py#L113
and here https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/split/descriptive_statistics.py#L120 when bin_size is 0.
Related to https://github.com/huggingface/datasets-server/issues/1443 | closed | 2023-09-22T19:35:25Z | 2023-12-14T20:41:49Z | 2023-12-14T20:41:49Z | AndreaFrancis |
1,909,214,988 | Fix /rows for empty split | Fix https://github.com/huggingface/datasets-server/issues/1752 | Fix /rows for empty split: Fix https://github.com/huggingface/datasets-server/issues/1752 | closed | 2023-09-22T16:36:36Z | 2023-09-25T09:09:14Z | 2023-09-25T09:09:13Z | lhoestq |
1,908,952,108 | Use a dedicated user in docker images | Not root as currently done - https://github.com/huggingface/datasets-server/blob/main/services/worker/Dockerfile
Proposed by @XciD [on Slack](https://huggingface.slack.com/archives/C04L6P8KNQ5/p1675864533240589?thread_ts=1675678412.096669&cid=C04L6P8KNQ5) (internal)
> this being said, moving to a dedicated user on dataset-server is a good practice.
| Use a dedicated user in docker images: Not root as currently done - https://github.com/huggingface/datasets-server/blob/main/services/worker/Dockerfile
Proposed by @XciD [on Slack](https://huggingface.slack.com/archives/C04L6P8KNQ5/p1675864533240589?thread_ts=1675678412.096669&cid=C04L6P8KNQ5) (internal)
> this being said, moving to a dedicated user on dataset-server is a good practice.
| open | 2023-09-22T13:50:39Z | 2023-10-27T11:30:57Z | null | severo |
1,908,722,827 | Update /search external docs URL in openapi.json | Update /search external docs URL in `openapi.json`, once that this PR is merged:
- #1767
fixing:
- #1663 | Update /search external docs URL in openapi.json: Update /search external docs URL in `openapi.json`, once that this PR is merged:
- #1767
fixing:
- #1663 | closed | 2023-09-22T11:21:07Z | 2023-09-22T12:02:14Z | 2023-09-22T12:01:37Z | albertvillanova |
1,908,344,217 | Update cryptography dependency to 41.0.4 to fix vulnerability | This should fix 5 dependabot alerts.
Fix #1837. | Update cryptography dependency to 41.0.4 to fix vulnerability: This should fix 5 dependabot alerts.
Fix #1837. | closed | 2023-09-22T07:29:08Z | 2023-09-22T08:35:18Z | 2023-09-22T08:35:17Z | albertvillanova |
1,908,333,985 | CI is broken due to vulnerability in cryptography 41.0.3 | See: https://github.com/huggingface/datasets-server/actions/runs/6271133994/job/17030193207?pr=1418
```
Found 1 known vulnerability in 1 package
Name Version ID Fix Versions
------------ ------- ------------------- ------------
cryptography 41.0.3 GHSA-v8gr-m533-ghj9 41.0.4
```
| CI is broken due to vulnerability in cryptography 41.0.3: See: https://github.com/huggingface/datasets-server/actions/runs/6271133994/job/17030193207?pr=1418
```
Found 1 known vulnerability in 1 package
Name Version ID Fix Versions
------------ ------- ------------------- ------------
cryptography 41.0.3 GHSA-v8gr-m533-ghj9 41.0.4
```
| closed | 2023-09-22T07:21:52Z | 2023-09-22T08:35:18Z | 2023-09-22T08:35:18Z | albertvillanova |
1,907,495,195 | increase resources | null | increase resources: | closed | 2023-09-21T18:00:59Z | 2023-09-21T18:02:37Z | 2023-09-21T18:02:36Z | AndreaFrancis |
1,907,473,161 | fix response type in delete /obsolete-cache | null | fix response type in delete /obsolete-cache: | closed | 2023-09-21T17:47:14Z | 2023-09-21T18:10:42Z | 2023-09-21T18:10:41Z | AndreaFrancis |
1,907,465,929 | Obsolete cache Tab in admin UI | Adding tab for obsolete cache actions (get list and delete)

| Obsolete cache Tab in admin UI: Adding tab for obsolete cache actions (get list and delete)

| closed | 2023-09-21T17:42:36Z | 2023-09-22T14:50:14Z | 2023-09-22T14:50:13Z | AndreaFrancis |
1,907,325,527 | Descriptive stats: wrong class label proportions when the labels are `-1` (no label) | e.g. for https://huggingface.co/datasets/glue/viewer/cola/test
the response contains
```json
{
"column_name": "label",
"column_type": "class_label",
"column_statistics": {
"nan_count": 0,
"nan_proportion": 0.0,
"n_unique": 1,
"frequencies": {
"acceptable": 1063
}
}
}
```
cc @polinaeterna | Descriptive stats: wrong class label proportions when the labels are `-1` (no label): e.g. for https://huggingface.co/datasets/glue/viewer/cola/test
the response contains
```json
{
"column_name": "label",
"column_type": "class_label",
"column_statistics": {
"nan_count": 0,
"nan_proportion": 0.0,
"n_unique": 1,
"frequencies": {
"acceptable": 1063
}
}
}
```
cc @polinaeterna | closed | 2023-09-21T16:12:46Z | 2023-09-29T15:55:32Z | 2023-09-29T15:55:32Z | lhoestq |
1,907,264,922 | unblock cyberharem | they're now using the NFAA tag.
I'll also delete the cache entries with `DatasetInBlockListError ` | unblock cyberharem: they're now using the NFAA tag.
I'll also delete the cache entries with `DatasetInBlockListError ` | closed | 2023-09-21T15:36:49Z | 2023-09-21T17:16:00Z | 2023-09-21T15:56:30Z | lhoestq |
1,907,218,411 | Descriptive stats: `duckdb.IOException: IO Error: No files found that match the pattern ...` | Happens for [imagenet-1k](https://huggingface.co/datasets/imagenet-1k):
```
"Traceback (most recent call last):\n",
" File \"/src/services/worker/src/worker/job_manager.py\", line 168, in process\n job_result = self.job_runner.compute()\n",
" File \"/src/services/worker/src/worker/job_runners/split/descriptive_statistics.py\", line 467, in compute\n compute_descriptive_statistics_response(\n", "
File \"/src/services/worker/src/worker/job_runners/split/descriptive_statistics.py\", line 396, in compute_descriptive_statistics_response\n cat_column_stats: CategoricalStatisticsItem = compute_categorical_statistics(\n",
" File \"/src/services/worker/src/worker/job_runners/split/descriptive_statistics.py\", line 232, in compute_categorical_statistics\n categories: List[Tuple[int, int]] = con.sql(\n",
"duckdb.IOException: IO Error: No files found that match the pattern \"/storage/stats-cache/48010914651841-split-descriptive-statistics-imagenet-1k-3acf5c2b/default/test/*.parquet\"\n"
```
cc @polinaeterna | Descriptive stats: `duckdb.IOException: IO Error: No files found that match the pattern ...`: Happens for [imagenet-1k](https://huggingface.co/datasets/imagenet-1k):
```
"Traceback (most recent call last):\n",
" File \"/src/services/worker/src/worker/job_manager.py\", line 168, in process\n job_result = self.job_runner.compute()\n",
" File \"/src/services/worker/src/worker/job_runners/split/descriptive_statistics.py\", line 467, in compute\n compute_descriptive_statistics_response(\n", "
File \"/src/services/worker/src/worker/job_runners/split/descriptive_statistics.py\", line 396, in compute_descriptive_statistics_response\n cat_column_stats: CategoricalStatisticsItem = compute_categorical_statistics(\n",
" File \"/src/services/worker/src/worker/job_runners/split/descriptive_statistics.py\", line 232, in compute_categorical_statistics\n categories: List[Tuple[int, int]] = con.sql(\n",
"duckdb.IOException: IO Error: No files found that match the pattern \"/storage/stats-cache/48010914651841-split-descriptive-statistics-imagenet-1k-3acf5c2b/default/test/*.parquet\"\n"
```
cc @polinaeterna | closed | 2023-09-21T15:12:57Z | 2023-11-06T17:21:41Z | 2023-11-06T17:21:41Z | lhoestq |
1,906,924,368 | Descriptive stats: `duckdb.Error: Invalid Error: don't know what type` | Happens e.g. for [xnli](https://huggingface.co/datasets/xnli) for the `all_languages` config for the `train` split
```
"Traceback (most recent call last):\n",
" File \"/src/services/worker/src/worker/job_manager.py\", line 160, in process\n job_result = self.job_runner.compute()\n",
" File \"/src/services/worker/src/worker/job_runners/split/descriptive_statistics.py\", line 404, in compute\n compute_descriptive_statistics_response(\n",
" File \"/src/services/worker/src/worker/job_runners/split/descriptive_statistics.py\", line 323, in compute_descriptive_statistics_response\n con.sql(\n",
"duckdb.Error: Invalid Error: don't know what type: \u000e\n"
```
cc @polinaeterna | Descriptive stats: `duckdb.Error: Invalid Error: don't know what type`: Happens e.g. for [xnli](https://huggingface.co/datasets/xnli) for the `all_languages` config for the `train` split
```
"Traceback (most recent call last):\n",
" File \"/src/services/worker/src/worker/job_manager.py\", line 160, in process\n job_result = self.job_runner.compute()\n",
" File \"/src/services/worker/src/worker/job_runners/split/descriptive_statistics.py\", line 404, in compute\n compute_descriptive_statistics_response(\n",
" File \"/src/services/worker/src/worker/job_runners/split/descriptive_statistics.py\", line 323, in compute_descriptive_statistics_response\n con.sql(\n",
"duckdb.Error: Invalid Error: don't know what type: \u000e\n"
```
cc @polinaeterna | closed | 2023-09-21T13:01:28Z | 2023-12-19T11:22:54Z | 2023-12-19T11:22:54Z | lhoestq |
1,906,858,828 | feat: 🎸 add "truncated" field to /first-rows | fixes #1731.
It will require refreshing split-first-rows-from-streaming and split-first-rows-from-parquet for all the datasets. The backfill will do it since I updated the versions.
Also, I put the "truncated" field as optional for now, but once the update is complete, I'll create another PR to make it mandatory (we cannot do a database migration since we don't know the value).
---
Followup PRs:
- [x] increase version of `split-first-rows-from-streaming` - 350K waiting jobs now. Let's wait until Monday to do the following step -> https://github.com/huggingface/datasets-server/pull/1844
- [x] refresh the remaining cache entries until we get 0:
```
db.cachedResponsesBlue.count({kind: {"$in": ["split-first-rows-from-streaming", "split-first-rows-from-parquet"]}, http_status: 200, "content.truncated": {"$exists": false}})
```
List of datasets to refresh:
```
db.cachedResponsesBlue.aggregate([
{$match: {kind: "split-first-rows-from-parquet", http_status: 200, "content.truncated": {"$exists": false}}},
{$group: {_id: "$dataset", count: {$sum: 1}}},
{$sort: {count: -1}},
{$project: {dataset: "$_id", _id: 0, count: 1}}
])
```
and
```
db.cachedResponsesBlue.aggregate([
{$match: {kind: "split-first-rows-from-parquet", http_status: 200, "content.truncated": {"$exists": false}}},
{$group: {_id: null, datasets: {$addToSet: "$dataset"}}},
])
```
2023/09/25: refreshing the remaining cache entries for `split-first-rows-from-parquet`.
- [x] set `truncated` field as mandatory, not optional. https://github.com/huggingface/datasets-server/pull/1896
| feat: ๐ธ add "truncated" field to /first-rows: fixes #1731.
It will require refreshing split-first-rows-from-streaming and split-first-rows-from-parquet for all the datasets. The backfill will do it since I updated the versions.
Also, I put the "truncated" field as optional for now, but once the update is complete, I'll create another PR to make it mandatory (we cannot do a database migration since we don't know the value).
---
Followup PRs:
- [x] increase version of `split-first-rows-from-streaming` - 350K waiting jobs now. Let's wait until Monday to do the following step -> https://github.com/huggingface/datasets-server/pull/1844
- [x] refresh the remaining cache entries until we get 0:
```
db.cachedResponsesBlue.count({kind: {"$in": ["split-first-rows-from-streaming", "split-first-rows-from-parquet"]}, http_status: 200, "content.truncated": {"$exists": false}})
```
List of datasets to refresh:
```
db.cachedResponsesBlue.aggregate([
{$match: {kind: "split-first-rows-from-parquet", http_status: 200, "content.truncated": {"$exists": false}}},
{$group: {_id: "$dataset", count: {$sum: 1}}},
{$sort: {count: -1}},
{$project: {dataset: "$_id", _id: 0, count: 1}}
])
```
and
```
db.cachedResponsesBlue.aggregate([
{$match: {kind: "split-first-rows-from-parquet", http_status: 200, "content.truncated": {"$exists": false}}},
{$group: {_id: null, datasets: {$addToSet: "$dataset"}}},
])
```
2023/09/25: refreshing the remaining cache entries for `split-first-rows-from-parquet`.
- [x] set `truncated` field as mandatory, not optional. https://github.com/huggingface/datasets-server/pull/1896
| closed | 2023-09-21T12:25:42Z | 2023-09-29T16:23:48Z | 2023-09-21T15:22:42Z | severo |
1,906,378,081 | docs: ✏️ remove doc for deleted /hub-cache endpoint | followup for #1826 | docs: ✏️ remove doc for deleted /hub-cache endpoint: followup for #1826 | closed | 2023-09-21T08:13:18Z | 2023-09-21T11:01:54Z | 2023-09-21T08:14:35Z | severo |
1,906,318,447 | Update pandas to 2.1.1 and duckdb to 0.9.0 | Update `pandas` dependency to 2.1.1 version, their latest patch release: https://github.com/pandas-dev/pandas/releases/tag/v2.1.1
Note that:
- several pandas versions were installed
- most of them were 2.0.2: there was a newer 2.0.3 patch release
- one was 2.1.0: for this, the newer 2.1.1 patch release is recommended
Additionally, update `duckdb` dependency to 0.9.0: https://github.com/duckdb/duckdb/releases/tag/v0.9.0
Note there was an incompatibility with `pandas=2.1.1` for `duckdb<0.9.0`. See comment below: https://github.com/huggingface/datasets-server/pull/1827#issuecomment-1729090034 | Update pandas to 2.1.1 and duckdb to 0.9.0: Update `pandas` dependency to 2.1.1 version, their latest patch release: https://github.com/pandas-dev/pandas/releases/tag/v2.1.1
Note that:
- several pandas versions were installed
- most of them were 2.0.2: there was a newer 2.0.3 patch release
- one was 2.1.0: for this, the newer 2.1.1 patch release is recommended
Additionally, update `duckdb` dependency to 0.9.0: https://github.com/duckdb/duckdb/releases/tag/v0.9.0
Note there was an incompatibility with `pandas=2.1.1` for `duckdb<0.9.0`. See comment below: https://github.com/huggingface/datasets-server/pull/1827#issuecomment-1729090034 | closed | 2023-09-21T07:37:43Z | 2023-10-03T09:37:01Z | 2023-10-02T16:54:16Z | albertvillanova |
1,905,069,201 | feat: 🎸 remove /hub-cache endpoint | We now use `sse/hub-cache` (server-sent events, instead of a paginated REST API). | feat: 🎸 remove /hub-cache endpoint: We now use `sse/hub-cache` (server-sent events, instead of a paginated REST API). | closed | 2023-09-20T13:59:59Z | 2023-09-20T15:09:47Z | 2023-09-20T15:09:46Z | severo |
1,905,024,834 | Fix the e2e tests in CI | The e2e tests last forever and fail. | Fix the e2e tests in CI: The e2e tests last forever and fail. | closed | 2023-09-20T13:38:23Z | 2023-09-20T15:14:43Z | 2023-09-20T15:10:13Z | severo |
1,904,491,105 | Send a message in Hub discussion after Parquet conversion | fixes #1270.
Implements https://github.com/huggingface/datasets-server/issues/1270#issuecomment-1726384167 | Send a message in Hub discussion after Parquet conversion: fixes #1270.
Implements https://github.com/huggingface/datasets-server/issues/1270#issuecomment-1726384167 | closed | 2023-09-20T08:45:06Z | 2023-10-27T14:12:19Z | 2023-09-25T08:27:08Z | severo |
1,904,403,387 | Avoid providing "mixed" cache while refreshing a dataset | When a dataset is being refreshed, or even after having been refreshed, its cache can be incoherent.
## Case 1
Reported by @AndreaFrancis: during a dataset refresh, the search was enabled in the viewer, but the "first rows" were not shown. The `split-duckdb-index` step had finished, while the `split-first-rows-from...` steps were still being computed. The data shown by the viewer were not coherent, and it would be hard for a user to understand what was occurring.
## Case 2
Seen with https://huggingface.co/datasets/jbrendsel/ECTSum. All the steps had been computed, and the `config-size` step had an error (`PreviousStepFormatError`). The erroneous cache entry remained after fixing the issue and refreshing the dataset. The reason is a previous step (`dataset-config-names`) now gives an error, and the rest of the cache entries are never cleaned, so they stay in the database even if useless. See https://github.com/huggingface/datasets-server/issues/1582#issuecomment-1727151243 and https://github.com/huggingface/datasets-server/issues/1285.
## Proposal
Give each "DAG" execution an identifier, and use the same identifier for all the endpoints. To ensure coherence between the responses, change the identifier we use in the API only when all the steps have been refreshed. If we send the identifier in the response, the client can also use the identifier value to check the coherence. If there is no previous identifier in the database, use the identifier of the current incomplete DAG (the dataset viewer already handles an incomplete state).
Once the new DAG execution has finished, we can delete the cache entries with other identifiers. | Avoid providing "mixed" cache while refreshing a dataset: When a dataset is being refreshed, or even after having been refreshed, its cache can be incoherent.
## Case 1
Reported by @AndreaFrancis: during a dataset refresh, the search was enabled in the viewer, but the "first rows" were not shown. The `split-duckdb-index` step had finished, while the `split-first-rows-from...` steps were still being computed. The data shown by the viewer were not coherent, and it would be hard for a user to understand what was occurring.
## Case 2
Seen with https://huggingface.co/datasets/jbrendsel/ECTSum. All the steps had been computed, and the `config-size` step had an error (`PreviousStepFormatError`). The erroneous cache entry remained after fixing the issue and refreshing the dataset. The reason is a previous step (`dataset-config-names`) now gives an error, and the rest of the cache entries are never cleaned, so they stay in the database even if useless. See https://github.com/huggingface/datasets-server/issues/1582#issuecomment-1727151243 and https://github.com/huggingface/datasets-server/issues/1285.
## Proposal
Give each "DAG" execution an identifier, and use the same identifier for all the endpoints. To ensure coherence between the responses, change the identifier we use in the API only when all the steps have been refreshed. If we send the identifier in the response, the client can also use the identifier value to check the coherence. If there is no previous identifier in the database, use the identifier of the current incomplete DAG (the dataset viewer already handles an incomplete state).
Once the new DAG execution has finished, we can delete the cache entries with other identifiers. | closed | 2023-09-20T07:58:22Z | 2024-01-09T15:42:19Z | 2024-01-09T15:42:19Z | severo |
1,903,668,804 | Delete cache and assets for non-existent datasets | Related to https://github.com/huggingface/datasets-server/issues/1285 and https://github.com/huggingface/datasets-server/issues/1284
This PR fixes only the obsolete cache entries for non-existent datasets; the ones related to obsolete configs/splits will be treated in another PR
 | Delete cache and assets for non-existent datasets: Related to https://github.com/huggingface/datasets-server/issues/1285 and https://github.com/huggingface/datasets-server/issues/1284
This PR fixes only the obsolete cache entries for non-existent datasets; the ones related to obsolete configs/splits will be treated in another PR
| closed | 2023-09-19T20:18:56Z | 2023-09-21T15:50:40Z | 2023-09-21T15:50:39Z | AndreaFrancis |
1,902,836,702 | feat: ๐ธ add HIGH priority level | And set the "refresh" jobs to HIGH priority in the admin space.
This way, the manual actions we do at https://huggingface.co/spaces/datasets-maintainers/datasets-server-admin-ui are effective more quickly. | feat: ๐ธ add HIGH priority level: And set the "refresh" jobs to HIGH priority in the admin space.
This way, the manual actions we do at https://huggingface.co/spaces/datasets-maintainers/datasets-server-admin-ui are effective more quickly. | closed | 2023-09-19T12:07:16Z | 2023-09-19T13:30:57Z | 2023-09-19T12:29:47Z | severo |
1,902,449,115 | CI invalid Tailscale API key | Our CI is broken because of an error with an invalid Tailscale API key.
See: https://github.com/huggingface/datasets-server/actions/runs/6232652311/job/16916246173?pr=1817 | CI invalid Tailscale API key: Our CI is broken because of an error with an invalid Tailscale API key.
See: https://github.com/huggingface/datasets-server/actions/runs/6232652311/job/16916246173?pr=1817 | closed | 2023-09-19T08:16:07Z | 2023-09-19T12:00:07Z | 2023-09-19T12:00:07Z | albertvillanova |
1,902,442,548 | Fix typo in docs index page | Fix typo in docs index page.
Fix #1814. | Fix typo in docs index page: Fix typo in docs index page.
Fix #1814. | closed | 2023-09-19T08:11:54Z | 2023-09-19T13:30:45Z | 2023-09-19T13:29:51Z | albertvillanova |
1,902,411,970 | Remove "operation" field from /sse/hub-cache, and add `?all=true` | Required by https://github.com/huggingface/moon-landing/pull/7456#discussion_r1328913356 and https://github.com/huggingface/moon-landing/pull/7456#discussion_r1328910039 (internal) | Remove "operation" field from /sse/hub-cache, and add `?all=true`: Required by https://github.com/huggingface/moon-landing/pull/7456#discussion_r1328913356 and https://github.com/huggingface/moon-landing/pull/7456#discussion_r1328910039 (internal) | closed | 2023-09-19T07:52:17Z | 2023-09-19T09:30:49Z | 2023-09-19T09:30:48Z | severo |
1,902,403,589 | Lower maxRowGroupByteSizeForCopy | As proposed by @lhoestq, this PR reduces the value of `maxRowGroupByteSizeForCopy` from 500_000_000 to 300_000_000, so that it is compatible with `maxArrowDataInMemory` (300_000_000).
This should fix the `UnexpectedApiError` that we have in some full viewer pages.
Fix #1795. | Lower maxRowGroupByteSizeForCopy: As proposed by @lhoestq, this PR reduces the value of `maxRowGroupByteSizeForCopy` from 500_000_000 to 300_000_000, so that it is compatible with `maxArrowDataInMemory` (300_000_000).
This should fix the `UnexpectedApiError` that we have in some full viewer pages.
Fix #1795. | closed | 2023-09-19T07:46:49Z | 2023-09-19T12:02:34Z | 2023-09-19T12:02:33Z | albertvillanova |
1,901,316,021 | Compute descriptive statistics for string type | - treat as category if num of unique values <= 40
- compute numerical stats over string length otherwise
UPD: decreased the number from 40 to 30, any objections? :)) | Compute descriptive statistics for string type: - treat as category if num of unique values <= 40
- compute numerical stats over string length otherwise
UPD: decreased the number from 40 to 30, any objections? :)) | closed | 2023-09-18T16:28:35Z | 2023-09-26T19:30:01Z | 2023-09-26T19:30:00Z | polinaeterna |
1,901,150,469 | custom exception for dataset script errors in job runners | Part of https://github.com/huggingface/datasets-server/issues/1443 for:
Many errors are due to dataset script errors, unrelated to the job runners' logic.
Adding script error recognition at the job manager level, based on stack trace content. All of the errors related to the dataset script follow the pattern `datasets_modules/datasets`.
I think this will help us separate the external errors from ours.
| custom exception for dataset script errors in job runners: Part of https://github.com/huggingface/datasets-server/issues/1443 for:
Many errors are due to dataset script errors, unrelated to the job runners' logic.
Adding script error recognition at the job manager level, based on stack trace content. All of the errors related to the dataset script follow the pattern `datasets_modules/datasets`.
I think this will help us separate the external errors from ours.
| closed | 2023-09-18T15:07:02Z | 2023-09-19T07:11:19Z | 2023-09-18T20:47:23Z | AndreaFrancis |
1,901,127,331 | Small glitch on docs index page | <img width="1064" alt="Screenshot 2023-09-18 at 16 50 40" src="https://github.com/huggingface/datasets-server/assets/326577/bc299055-50fe-476e-bcf0-1f0a3411ba24">
| Small glitch on docs index page: <img width="1064" alt="Screenshot 2023-09-18 at 16 50 40" src="https://github.com/huggingface/datasets-server/assets/326577/bc299055-50fe-476e-bcf0-1f0a3411ba24">
| closed | 2023-09-18T14:55:25Z | 2023-09-19T13:29:52Z | 2023-09-19T13:29:52Z | julien-c |
1,899,001,457 | ci: ๐ก check for unused arguments in functions | not on tests, since we have a lot of them (fixtures). | ci: ๐ก check for unused arguments in functions: not on tests, since we have a lot of them (fixtures). | closed | 2023-09-15T19:52:26Z | 2023-09-18T13:20:42Z | 2023-09-18T13:17:23Z | severo |
1,898,955,464 | erroneous "en" config in parquet-and-info for dataset aakanksha/udpos | Here:
https://github.com/huggingface/datasets-server/blob/09e3f9827048d552c4aefc8125fe6366dbeabc34/services/worker/src/worker/job_runners/config/parquet_and_info.py#L422
`builder.config.name` is changed from `default` (the correct value) to `en`, which breaks the final response (parquet files are created under the [`en/`](https://huggingface.co/datasets/aakanksha/udpos/tree/refs%2Fconvert%2Fparquet/en) directory, which later does not match the expected `default` config).
I have no idea why, or how to fix that. | erroneous "en" config in parquet-and-info for dataset aakanksha/udpos : Here:
https://github.com/huggingface/datasets-server/blob/09e3f9827048d552c4aefc8125fe6366dbeabc34/services/worker/src/worker/job_runners/config/parquet_and_info.py#L422
`builder.config.name` is changed from `default` (the correct value) to `en`, which breaks the final response (parquet files are created under the [`en/`](https://huggingface.co/datasets/aakanksha/udpos/tree/refs%2Fconvert%2Fparquet/en) directory, which later does not match the expected `default` config).
I have no idea why, or how to fix that. | closed | 2023-09-15T19:20:52Z | 2024-02-06T15:07:53Z | 2024-02-06T15:07:52Z | severo |
1,898,440,399 | Fix empty datasets | Two fixes to remove the erroneous `PreviousStepFormatError` for empty datasets. See `nicaplz/autotrain-data-autogpt` for example. | Fix empty datasets: Two fixes to remove the erroneous `PreviousStepFormatError` for empty datasets. See `nicaplz/autotrain-data-autogpt` for example. | closed | 2023-09-15T13:29:48Z | 2023-09-15T13:39:49Z | 2023-09-15T13:39:48Z | severo |
1,898,421,459 | Delete obsolete cache entries (config="namespace--dataset" instead of "default") | All the datasets now use "default" as the default config name. However, the cache entries that used the previous default value ("namespace--dataset") are still in the database, even if not used anymore. We have to delete them.
 | Delete obsolete cache entries (config="namespace--dataset" instead of "default"): All the datasets now use "default" as the default config name. However, the cache entries that used the previous default value ("namespace--dataset") are still in the database, even if not used anymore. We have to delete them.
| closed | 2023-09-15T13:18:30Z | 2023-09-15T13:25:05Z | 2023-09-15T13:24:58Z | severo |
1,898,054,280 | Add flake8-pep585 dev dependency | Add `flake8-pep585` dev dependency so that our CI `flake8` checks that no deprecated type from `typing` is used, as established by [PEP 585](https://peps.python.org/pep-0585/).
Fix #1808.
CC: @severo | Add flake8-pep585 dev dependency: Add `flake8-pep585` dev dependency so that our CI `flake8` checks that no deprecated type from `typing` is used, as established by [PEP 585](https://peps.python.org/pep-0585/).
Fix #1808.
CC: @severo | closed | 2023-09-15T09:27:50Z | 2023-09-15T09:50:20Z | 2023-09-15T09:50:19Z | albertvillanova |
1,897,977,228 | Make CI check no deprecated typing type is used | As suggested by @severo (https://github.com/huggingface/datasets-server/pull/1805#pullrequestreview-1627340639), we should make our CI automatically check that no deprecated type from `typing` is used.
Related to:
- #1805 | Make CI check no deprecated typing type is used: As suggested by @severo (https://github.com/huggingface/datasets-server/pull/1805#pullrequestreview-1627340639), we should make our CI automatically check that no deprecated type from `typing` is used.
Related to:
- #1805 | closed | 2023-09-15T08:41:08Z | 2023-09-15T09:50:20Z | 2023-09-15T09:50:20Z | albertvillanova |
1,896,906,328 | Validate job runner | We should be able to create an invalid job runner (e.g. for tests, but most importantly because we need to do it to kill zombie jobs). Therefore I moved the check_config_exists() validation check to a dedicated `.validate()` method that is called only when the job will be run.
This should fix the kill_zombies failures that make all our workers crash in a loop now. | Validate job runner: We should be able to create an invalid job runner (e.g. for tests, but most importantly because we need to do it to kill zombie jobs). Therefore I moved the check_config_exists() validation check to a dedicated `.validate()` method that is called only when the job will be run.
This should fix the kill_zombies failures that make all our workers crash in a loop now. | closed | 2023-09-14T16:16:36Z | 2023-09-15T13:19:56Z | 2023-09-14T16:46:37Z | lhoestq
1,896,859,878 | Block cyberharem | Re-applying https://github.com/huggingface/datasets-server/pull/1791 since the author didn't add the NFAA tag
This will disable the preview for all the future datasets.
I'll also delete the cached previews for the existing datasets from this namespace.
cc @giadilli | Block cyberharem: Re-applying https://github.com/huggingface/datasets-server/pull/1791 since the author didn't add the NFAA tag
This will disable the preview for all the future datasets.
I'll also delete the cached previews for the existing datasets from this namespace.
cc @giadilli | closed | 2023-09-14T15:49:37Z | 2023-09-14T16:22:36Z | 2023-09-14T16:22:35Z | lhoestq |
1,896,382,218 | Replace deprecated typing types | Since Python 3.9 (this project is in Python 3.9.15), some types defined in the `typing` module are deprecated.
This PR replaces deprecated `typing` types. | Replace deprecated typing types: Since Python 3.9 (this project is in Python 3.9.15), some types defined in the `typing` module are deprecated.
This PR replaces deprecated `typing` types. | closed | 2023-09-14T11:36:54Z | 2023-09-15T08:30:40Z | 2023-09-15T08:30:38Z | albertvillanova |
1,896,038,658 | Fix typo in SplitFirstRowsFromParquetJobRunner | Fix typo in `SplitFirstRowsFromParquetJobRunner`. | Fix typo in SplitFirstRowsFromParquetJobRunner: Fix typo in `SplitFirstRowsFromParquetJobRunner`. | closed | 2023-09-14T08:34:22Z | 2023-09-15T12:02:44Z | 2023-09-15T12:02:43Z | albertvillanova |
1,895,331,165 | Increase resources | null | Increase resources: | closed | 2023-09-13T22:07:45Z | 2023-09-13T22:08:44Z | 2023-09-13T22:08:43Z | AndreaFrancis |
1,894,910,143 | Adding parent validation at job runner level | Before, we were adding split validation in some job runners, but we should do it in all of them, according to their granularity.
**Config** level should validate that the config exists in the dataset.
**Split** level should validate that the split exists in the config and that the config exists in the dataset.
**Example:**
When trying to `force-refresh` for step `split-first-rows-from-parquet` for dataset=ColumbiaNLP/FLUTE config=ColumbiaNLP--FLUTE split=train it failed with error:
`404, message='Not Found', url=URL('https://huggingface.co/datasets/ColumbiaNLP/FLUTE/resolve/refs%2Fconvert%2Fparquet/ColumbiaNLP--FLUTE/train/0000.parquet')
`
But it is wrong because the config no longer exists for the dataset and it should have thrown a SplitNotFoundError.
Related to https://github.com/huggingface/datasets-server/issues/1696 but meanwhile we could at least validate that config and split exist before processing.
 | Adding parent validation at job runner level: Before, we were adding split validation in some job runners, but we should do it in all of them, according to their granularity.
**Config** level should validate that the config exists in the dataset.
**Split** level should validate that the split exists in the config and that the config exists in the dataset.
**Example:**
When trying to `force-refresh` for step `split-first-rows-from-parquet` for dataset=ColumbiaNLP/FLUTE config=ColumbiaNLP--FLUTE split=train it failed with error:
`404, message='Not Found', url=URL('https://huggingface.co/datasets/ColumbiaNLP/FLUTE/resolve/refs%2Fconvert%2Fparquet/ColumbiaNLP--FLUTE/train/0000.parquet')
`
But it is wrong because the config no longer exists for the dataset and it should have thrown a SplitNotFoundError.
Related to https://github.com/huggingface/datasets-server/issues/1696 but meanwhile we could at least validate that config and split exist before processing.
| closed | 2023-09-13T16:52:58Z | 2023-09-13T22:04:36Z | 2023-09-13T22:04:34Z | AndreaFrancis |
1,894,373,325 | Update gitpython to 3.1.36 version | Update `gitpython` to 3.1.36 version to fix the CVE-2023-41040 vulnerability.
Fix #1785. | Update gitpython to 3.1.36 version: Update `gitpython` to 3.1.36 version to fix the CVE-2023-41040 vulnerability.
Fix #1785. | closed | 2023-09-13T11:58:24Z | 2023-09-13T15:21:29Z | 2023-09-13T15:21:27Z | albertvillanova |
1,893,940,604 | fix: ๐ remove unnecessary configs | comments by @AndreaFrancis in https://github.com/huggingface/datasets-server/pull/1784 | fix: ๐ remove unnecessary configs: comments by @AndreaFrancis in https://github.com/huggingface/datasets-server/pull/1784 | closed | 2023-09-13T07:42:36Z | 2023-09-13T08:10:44Z | 2023-09-13T08:10:42Z | severo |
1,893,150,561 | build(deps-dev): bump gitpython from 3.1.34 to 3.1.35 in /libs/libcommon | Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.34 to 3.1.35.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/gitpython-developers/GitPython/releases">gitpython's releases</a>.</em></p>
<blockquote>
<h2>3.1.35 - a fix for CVE-2023-41040</h2>
<h2>What's Changed</h2>
<ul>
<li>Bump actions/checkout from 3 to 4 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1643">gitpython-developers/GitPython#1643</a></li>
<li>Fix 'Tree' object has no attribute '_name' when submodule path is normal path by <a href="https://github.com/CosmosAtlas"><code>@CosmosAtlas</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1645">gitpython-developers/GitPython#1645</a></li>
<li>Fix CVE-2023-41040 by <a href="https://github.com/facutuesca"><code>@facutuesca</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1644">gitpython-developers/GitPython#1644</a></li>
<li>Only make config more permissive in tests that need it by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1648">gitpython-developers/GitPython#1648</a></li>
<li>Added test for PR <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1645">#1645</a> submodule path by <a href="https://github.com/CosmosAtlas"><code>@CosmosAtlas</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1647">gitpython-developers/GitPython#1647</a></li>
<li>Fix Windows environment variable upcasing bug by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1650">gitpython-developers/GitPython#1650</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/CosmosAtlas"><code>@CosmosAtlas</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1645">gitpython-developers/GitPython#1645</a></li>
<li><a href="https://github.com/facutuesca"><code>@facutuesca</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1644">gitpython-developers/GitPython#1644</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.34...3.1.35">https://github.com/gitpython-developers/GitPython/compare/3.1.34...3.1.35</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/c8e303ffd3204195fc7f768f7b17dc5bde3dd53f"><code>c8e303f</code></a> prepare next release</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/09e1b3dbae3437cf3e2c7fb0326128c2e20b372e"><code>09e1b3d</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1650">#1650</a> from EliahKagan/envcase</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/8017421ade3d1058d753e24119d1f7796a84abe6"><code>8017421</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1647">#1647</a> from CosmosAtlas/master</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/fafb4f6651eac242a7e143831fbe23d10beaf89b"><code>fafb4f6</code></a> updated docs to better describe testing procedure with new repo</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/9da24d46c64eaf4c7db65c0f67324801fafbf30d"><code>9da24d4</code></a> add test for submodule path not owned by submodule case</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/eebdb25ee6e88d8fce83ea0970bd08f5e5301f65"><code>eebdb25</code></a> Eliminate duplication of git.util.cwd logic</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/c7fad20be5df0a86636459bf673ff9242a82e1fc"><code>c7fad20</code></a> Fix Windows env var upcasing regression</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/7296e5c021450743e5fe824e94b830a73eebc4c8"><code>7296e5c</code></a> Make test helper script a file, for readability</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/d88372a11ac145d92013dcc64b7d21a5a6ad3a91"><code>d88372a</code></a> Add test for Windows env var upcasing regression</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/11839ab5ce4d721d127283f1d37ca712d0b79027"><code>11839ab</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1648">#1648</a> from EliahKagan/file-protocol</li>
<li>Additional commits viewable in <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.34...3.1.35">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/datasets-server/network/alerts).
</details> | build(deps-dev): bump gitpython from 3.1.34 to 3.1.35 in /libs/libcommon: Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.34 to 3.1.35.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/gitpython-developers/GitPython/releases">gitpython's releases</a>.</em></p>
<blockquote>
<h2>3.1.35 - a fix for CVE-2023-41040</h2>
<h2>What's Changed</h2>
<ul>
<li>Bump actions/checkout from 3 to 4 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1643">gitpython-developers/GitPython#1643</a></li>
<li>Fix 'Tree' object has no attribute '_name' when submodule path is normal path by <a href="https://github.com/CosmosAtlas"><code>@CosmosAtlas</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1645">gitpython-developers/GitPython#1645</a></li>
<li>Fix CVE-2023-41040 by <a href="https://github.com/facutuesca"><code>@facutuesca</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1644">gitpython-developers/GitPython#1644</a></li>
<li>Only make config more permissive in tests that need it by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1648">gitpython-developers/GitPython#1648</a></li>
<li>Added test for PR <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1645">#1645</a> submodule path by <a href="https://github.com/CosmosAtlas"><code>@CosmosAtlas</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1647">gitpython-developers/GitPython#1647</a></li>
<li>Fix Windows environment variable upcasing bug by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1650">gitpython-developers/GitPython#1650</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/CosmosAtlas"><code>@CosmosAtlas</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1645">gitpython-developers/GitPython#1645</a></li>
<li><a href="https://github.com/facutuesca"><code>@facutuesca</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1644">gitpython-developers/GitPython#1644</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.34...3.1.35">https://github.com/gitpython-developers/GitPython/compare/3.1.34...3.1.35</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/c8e303ffd3204195fc7f768f7b17dc5bde3dd53f"><code>c8e303f</code></a> prepare next release</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/09e1b3dbae3437cf3e2c7fb0326128c2e20b372e"><code>09e1b3d</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1650">#1650</a> from EliahKagan/envcase</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/8017421ade3d1058d753e24119d1f7796a84abe6"><code>8017421</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1647">#1647</a> from CosmosAtlas/master</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/fafb4f6651eac242a7e143831fbe23d10beaf89b"><code>fafb4f6</code></a> updated docs to better describe testing procedure with new repo</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/9da24d46c64eaf4c7db65c0f67324801fafbf30d"><code>9da24d4</code></a> add test for submodule path not owned by submodule case</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/eebdb25ee6e88d8fce83ea0970bd08f5e5301f65"><code>eebdb25</code></a> Eliminate duplication of git.util.cwd logic</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/c7fad20be5df0a86636459bf673ff9242a82e1fc"><code>c7fad20</code></a> Fix Windows env var upcasing regression</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/7296e5c021450743e5fe824e94b830a73eebc4c8"><code>7296e5c</code></a> Make test helper script a file, for readability</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/d88372a11ac145d92013dcc64b7d21a5a6ad3a91"><code>d88372a</code></a> Add test for Windows env var upcasing regression</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/11839ab5ce4d721d127283f1d37ca712d0b79027"><code>11839ab</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1648">#1648</a> from EliahKagan/file-protocol</li>
<li>Additional commits viewable in <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.34...3.1.35">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/datasets-server/network/alerts).
</details> | closed | 2023-09-12T19:44:52Z | 2023-09-15T07:46:26Z | 2023-09-15T07:46:23Z | dependabot[bot] |
1,893,146,854 | build(deps-dev): bump gitpython from 3.1.34 to 3.1.35 in /e2e | Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.34 to 3.1.35.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/gitpython-developers/GitPython/releases">gitpython's releases</a>.</em></p>
<blockquote>
<h2>3.1.35 - a fix for CVE-2023-41040</h2>
<h2>What's Changed</h2>
<ul>
<li>Bump actions/checkout from 3 to 4 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1643">gitpython-developers/GitPython#1643</a></li>
<li>Fix 'Tree' object has no attribute '_name' when submodule path is normal path by <a href="https://github.com/CosmosAtlas"><code>@CosmosAtlas</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1645">gitpython-developers/GitPython#1645</a></li>
<li>Fix CVE-2023-41040 by <a href="https://github.com/facutuesca"><code>@facutuesca</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1644">gitpython-developers/GitPython#1644</a></li>
<li>Only make config more permissive in tests that need it by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1648">gitpython-developers/GitPython#1648</a></li>
<li>Added test for PR <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1645">#1645</a> submodule path by <a href="https://github.com/CosmosAtlas"><code>@CosmosAtlas</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1647">gitpython-developers/GitPython#1647</a></li>
<li>Fix Windows environment variable upcasing bug by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1650">gitpython-developers/GitPython#1650</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/CosmosAtlas"><code>@โCosmosAtlas</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1645">gitpython-developers/GitPython#1645</a></li>
<li><a href="https://github.com/facutuesca"><code>@โfacutuesca</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1644">gitpython-developers/GitPython#1644</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.34...3.1.35">https://github.com/gitpython-developers/GitPython/compare/3.1.34...3.1.35</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/c8e303ffd3204195fc7f768f7b17dc5bde3dd53f"><code>c8e303f</code></a> prepare next release</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/09e1b3dbae3437cf3e2c7fb0326128c2e20b372e"><code>09e1b3d</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1650">#1650</a> from EliahKagan/envcase</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/8017421ade3d1058d753e24119d1f7796a84abe6"><code>8017421</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1647">#1647</a> from CosmosAtlas/master</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/fafb4f6651eac242a7e143831fbe23d10beaf89b"><code>fafb4f6</code></a> updated docs to better describe testing procedure with new repo</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/9da24d46c64eaf4c7db65c0f67324801fafbf30d"><code>9da24d4</code></a> add test for submodule path not owned by submodule case</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/eebdb25ee6e88d8fce83ea0970bd08f5e5301f65"><code>eebdb25</code></a> Eliminate duplication of git.util.cwd logic</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/c7fad20be5df0a86636459bf673ff9242a82e1fc"><code>c7fad20</code></a> Fix Windows env var upcasing regression</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/7296e5c021450743e5fe824e94b830a73eebc4c8"><code>7296e5c</code></a> Make test helper script a file, for readability</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/d88372a11ac145d92013dcc64b7d21a5a6ad3a91"><code>d88372a</code></a> Add test for Windows env var upcasing regression</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/11839ab5ce4d721d127283f1d37ca712d0b79027"><code>11839ab</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1648">#1648</a> from EliahKagan/file-protocol</li>
<li>Additional commits viewable in <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.34...3.1.35">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/datasets-server/network/alerts).
</details> | build(deps-dev): bump gitpython from 3.1.34 to 3.1.35 in /e2e: Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.34 to 3.1.35.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/gitpython-developers/GitPython/releases">gitpython's releases</a>.</em></p>
<blockquote>
<h2>3.1.35 - a fix for CVE-2023-41040</h2>
<h2>What's Changed</h2>
<ul>
<li>Bump actions/checkout from 3 to 4 by <a href="https://github.com/dependabot"><code>@dependabot</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1643">gitpython-developers/GitPython#1643</a></li>
<li>Fix 'Tree' object has no attribute '_name' when submodule path is normal path by <a href="https://github.com/CosmosAtlas"><code>@CosmosAtlas</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1645">gitpython-developers/GitPython#1645</a></li>
<li>Fix CVE-2023-41040 by <a href="https://github.com/facutuesca"><code>@facutuesca</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1644">gitpython-developers/GitPython#1644</a></li>
<li>Only make config more permissive in tests that need it by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1648">gitpython-developers/GitPython#1648</a></li>
<li>Added test for PR <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1645">#1645</a> submodule path by <a href="https://github.com/CosmosAtlas"><code>@CosmosAtlas</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1647">gitpython-developers/GitPython#1647</a></li>
<li>Fix Windows environment variable upcasing bug by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1650">gitpython-developers/GitPython#1650</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/CosmosAtlas"><code>@CosmosAtlas</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1645">gitpython-developers/GitPython#1645</a></li>
<li><a href="https://github.com/facutuesca"><code>@facutuesca</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1644">gitpython-developers/GitPython#1644</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.34...3.1.35">https://github.com/gitpython-developers/GitPython/compare/3.1.34...3.1.35</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/c8e303ffd3204195fc7f768f7b17dc5bde3dd53f"><code>c8e303f</code></a> prepare next release</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/09e1b3dbae3437cf3e2c7fb0326128c2e20b372e"><code>09e1b3d</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1650">#1650</a> from EliahKagan/envcase</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/8017421ade3d1058d753e24119d1f7796a84abe6"><code>8017421</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1647">#1647</a> from CosmosAtlas/master</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/fafb4f6651eac242a7e143831fbe23d10beaf89b"><code>fafb4f6</code></a> updated docs to better describe testing procedure with new repo</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/9da24d46c64eaf4c7db65c0f67324801fafbf30d"><code>9da24d4</code></a> add test for submodule path not owned by submodule case</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/eebdb25ee6e88d8fce83ea0970bd08f5e5301f65"><code>eebdb25</code></a> Eliminate duplication of git.util.cwd logic</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/c7fad20be5df0a86636459bf673ff9242a82e1fc"><code>c7fad20</code></a> Fix Windows env var upcasing regression</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/7296e5c021450743e5fe824e94b830a73eebc4c8"><code>7296e5c</code></a> Make test helper script a file, for readability</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/d88372a11ac145d92013dcc64b7d21a5a6ad3a91"><code>d88372a</code></a> Add test for Windows env var upcasing regression</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/11839ab5ce4d721d127283f1d37ca712d0b79027"><code>11839ab</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1648">#1648</a> from EliahKagan/file-protocol</li>
<li>Additional commits viewable in <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.34...3.1.35">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/datasets-server/network/alerts).
</details> | closed | 2023-09-12T19:42:47Z | 2023-09-15T07:46:19Z | 2023-09-15T07:46:14Z | dependabot[bot] |
1,891,210,397 | fix: split-descriptive-statistics fails for datasets with list features | While reviewing https://github.com/huggingface/datasets-server/issues/1443 I found that, for the first error (AttributeError), there is a minor bug in split-descriptive-statistics: it fails when features include lists (sequences).
After this fix, we should force-refresh cached records for:
```
db.cachedResponsesBlue.countDocuments({error_code:"UnexpectedError", "details.copied_from_artifact":{$exists:false}, "details.cause_exception":"AttributeError",kind:"split-descriptive-statistics"})
9748
``` | fix: split-descriptive-statistics fails for datasets with list features: While reviewing https://github.com/huggingface/datasets-server/issues/1443 I found that, for the first error (AttributeError), there is a minor bug in split-descriptive-statistics: it fails when features include lists (sequences).
After this fix, we should force-refresh cached records for:
```
db.cachedResponsesBlue.countDocuments({error_code:"UnexpectedError", "details.copied_from_artifact":{$exists:false}, "details.cause_exception":"AttributeError",kind:"split-descriptive-statistics"})
9748
``` | closed | 2023-09-11T20:24:45Z | 2023-09-12T19:32:40Z | 2023-09-12T19:32:39Z | AndreaFrancis |
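The force-refresh mentioned in the record above would target the same documents as the `countDocuments` query it quotes. A minimal Python sketch of that filter, usable with pymongo (the `delete_many` call shown in comments is only one hypothetical way to force recomputation; the issue does not specify the actual refresh mechanism):

```python
def stale_stats_filter() -> dict:
    # Same filter as the mongosh countDocuments query above: UnexpectedError
    # cache entries produced by the split-descriptive-statistics step whose
    # underlying cause was an AttributeError, excluding entries that were
    # merely copied from another artifact.
    return {
        "error_code": "UnexpectedError",
        "details.copied_from_artifact": {"$exists": False},
        "details.cause_exception": "AttributeError",
        "kind": "split-descriptive-statistics",
    }

# With pymongo, the refresh could target the same documents, e.g.:
#   db.cachedResponsesBlue.delete_many(stale_stats_filter())
# (deleting entries so they get recomputed is an assumption for illustration)
```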
1,891,118,608 | Adding back incremental queue metrics | Now that the queue is working well, add back incremental queue metrics.
Note: A rollback was done in this PR https://github.com/huggingface/datasets-server/pull/1698/files
| Adding back incremental queue metrics: Now that the queue is working well, add back incremental queue metrics.
Note: A rollback was done in this PR https://github.com/huggingface/datasets-server/pull/1698/files
| closed | 2023-09-11T19:18:45Z | 2023-09-12T15:08:32Z | 2023-09-12T15:08:30Z | AndreaFrancis |
1,889,909,580 | Viewer works on dataset landing page but not on full viewer page: UnexpectedApiError | The viewer works on the dataset landing page; see for example: https://huggingface.co/datasets/hyperdemocracy/us-congress-bills

However it raises `UnexpectedApiError ` on the full viewer page; see: https://huggingface.co/datasets/hyperdemocracy/us-congress-bills/viewer/default/train

Fix Hub issue: https://huggingface.co/datasets/hyperdemocracy/us-congress-bills/discussions/1
CC: @galtay @lhoestq | Viewer works on dataset landing page but not on full viewer page: UnexpectedApiError : The viewer works on the dataset landing page; see for example: https://huggingface.co/datasets/hyperdemocracy/us-congress-bills

However it raises `UnexpectedApiError ` on the full viewer page; see: https://huggingface.co/datasets/hyperdemocracy/us-congress-bills/viewer/default/train

Fix Hub issue: https://huggingface.co/datasets/hyperdemocracy/us-congress-bills/discussions/1
CC: @galtay @lhoestq | closed | 2023-09-11T08:02:09Z | 2023-09-19T12:02:34Z | 2023-09-19T12:02:34Z | albertvillanova |
1,887,874,129 | Audio examples not showing | When using the dataset viewer (for audio datasets), the audio examples are not showing. Instead, the following message is displayed: "Not supported with pagination yet".
This issue only arises when you go into the dataset viewer. The previewed small version does show the audio example correctly.


| Audio examples not showing: When using the dataset viewer (for audio datasets), the audio examples are not showing. Instead, the following message is displayed: "Not supported with pagination yet".
This issue only arises when you go into the dataset viewer. The previewed small version does show the audio example correctly.


| closed | 2023-09-08T11:46:46Z | 2023-09-09T12:44:34Z | 2023-09-09T12:44:34Z | gillesmeyhi |
1,887,364,827 | Revert "feat: 🎸 disable the viewer for org CyberHarem (#1791)" | This reverts commit 66d755eb3f020e7a46866919c636387d46756ff5.
to give the user 48h to respond before we block
following https://github.com/huggingface/datasets-server/pull/1791#issuecomment-1711357091 | Revert "feat: 🎸 disable the viewer for org CyberHarem (#1791)": This reverts commit 66d755eb3f020e7a46866919c636387d46756ff5.
to give the user 48h to respond before we block
following https://github.com/huggingface/datasets-server/pull/1791#issuecomment-1711357091 | closed | 2023-09-08T10:10:14Z | 2023-09-08T11:09:54Z | 2023-09-08T11:09:53Z | lhoestq |
1,887,304,024 | Enable audio in the Dataset Viewer pagination | It's now working at the correct speed, e.g. https://huggingface.co/datasets/arabic_speech_corpus/viewer/clean/train?p=1 takes only a few seconds. It was only enabled on this dataset for testing purposes.
Following #1788
Fixes (hopefully once and for all) https://github.com/huggingface/datasets-server/issues/1255 | Enable audio in the Dataset Viewer pagination: It's now working at the correct speed, e.g. https://huggingface.co/datasets/arabic_speech_corpus/viewer/clean/train?p=1 takes only a few seconds. It was only enabled on this dataset for testing purposes.
Following #1788
Fixes (hopefully once and for all) https://github.com/huggingface/datasets-server/issues/1255 | closed | 2023-09-08T09:31:58Z | 2023-09-08T10:08:33Z | 2023-09-08T10:08:32Z | lhoestq |
1,887,282,184 | feat: 🎸 disable the viewer for org CyberHarem | null | feat: 🎸 disable the viewer for org CyberHarem: | closed | 2023-09-08T09:17:29Z | 2023-09-08T10:10:55Z | 2023-09-08T09:18:15Z | severo |
1,886,221,297 | Change sort column in search | null | Change sort column in search: | closed | 2023-09-07T16:30:00Z | 2023-09-07T16:51:50Z | 2023-09-07T16:51:49Z | AndreaFrancis |
1,886,090,978 | feat: 🎸 add search pods | null | feat: 🎸 add search pods: | closed | 2023-09-07T15:11:07Z | 2023-09-07T15:11:52Z | 2023-09-07T15:11:13Z | severo |
1,886,028,128 | Don't convert mp3 or wav | Following https://github.com/huggingface/datasets-server/pull/1743
Fixes (hopefully) #1255 if the speed is finally good enough. | Don't convert mp3 or wav: Following https://github.com/huggingface/datasets-server/pull/1743
Fixes (hopefully) #1255 if the speed is finally good enough. | closed | 2023-09-07T14:34:58Z | 2023-09-07T16:38:26Z | 2023-09-07T16:38:25Z | lhoestq |
1,885,670,625 | docs: ✏️ add an example of search on image dataset in openapi | null | docs: ✏️ add an example of search on image dataset in openapi: | closed | 2023-09-07T11:06:00Z | 2023-09-07T12:01:54Z | 2023-09-07T12:01:20Z | severo |
1,885,657,282 | docs: ✏️ fix the docs: search is not performed on ClassLabel | See also https://github.com/huggingface/hub-docs/pull/940 | docs: ✏️ fix the docs: search is not performed on ClassLabel: See also https://github.com/huggingface/hub-docs/pull/940 | closed | 2023-09-07T10:57:04Z | 2023-09-08T11:27:46Z | 2023-09-08T11:27:12Z | severo |
1,885,347,227 | Upgrade gitpython (again again again) | There is a vulnerability (that surely does not affect us). See https://github.com/huggingface/datasets-server/security/dependabot/254.
Currently, no new version to install, so let's wait for a new version to be published. | Upgrade gitpython (again again again): There is a vulnerability (that surely does not affect us). See https://github.com/huggingface/datasets-server/security/dependabot/254.
Currently, no new version to install, so let's wait for a new version to be published. | closed | 2023-09-07T08:12:10Z | 2023-09-13T15:21:29Z | 2023-09-13T15:21:29Z | severo |
1,884,672,829 | Add /sse/hub-cache to update the Hub's backend cache | TODO:
- [x] setup replicaset in mongo server, because it's required to be able to watch the collection changes
- [x] feature: send an SSE on every update of a `dataset-hub-cache` cache entry
- [x] feature: ^ only if some value has changed (?)
- [x] feature: On the first call to /sse/hub-cache, emit one SSE per `dataset-hub-cache` cache entry (initial synchronization)
- [x] <strike>document in OpenAPI (no need to add to the docs, as it's internal)</strike> -> No: it's not supported by the OpenAPI specification (see https://github.com/OAI/OpenAPI-Specification/issues/396)
- [x] add sseApi in https://github.com/huggingface/infra-deployments/blob/main/apps/datasets-server/datasets-server-prod.yaml and https://github.com/huggingface/infra-deployments/blob/main/apps/datasets-server/datasets-server-dev.yaml (internal)?
- [x] document the usage of /sse/hub-cache in `services/sse-api/README.md` since it's not possible to do so in the OpenAPI spec
Maybe:
- [ ] add e2e tests
- [ ] monitor the overhead that it generates?
- [ ] How to make features 1 and 3 work well together? What if a `dataset-hub-cache` cache entry is updated in the middle of the initial synchronization SSE? How do we test this?
- [ ] protect the endpoint so only the Hub can request it?
| Add /sse/hub-cache to update the Hub's backend cache: TODO:
- [x] setup replicaset in mongo server, because it's required to be able to watch the collection changes
- [x] feature: send an SSE on every update of a `dataset-hub-cache` cache entry
- [x] feature: ^ only if some value has changed (?)
- [x] feature: On the first call to /sse/hub-cache, emit one SSE per `dataset-hub-cache` cache entry (initial synchronization)
- [x] <strike>document in OpenAPI (no need to add to the docs, as it's internal)</strike> -> No: it's not supported by the OpenAPI specification (see https://github.com/OAI/OpenAPI-Specification/issues/396)
- [x] add sseApi in https://github.com/huggingface/infra-deployments/blob/main/apps/datasets-server/datasets-server-prod.yaml and https://github.com/huggingface/infra-deployments/blob/main/apps/datasets-server/datasets-server-dev.yaml (internal)?
- [x] document the usage of /sse/hub-cache in `services/sse-api/README.md` since it's not possible to do so in the OpenAPI spec
Maybe:
- [ ] add e2e tests
- [ ] monitor the overhead that it generates?
- [ ] How to make features 1 and 3 work well together? What if a `dataset-hub-cache` cache entry is updated in the middle of the initial synchronization SSE? How do we test this?
- [ ] protect the endpoint so only the Hub can request it?
| closed | 2023-09-06T20:04:37Z | 2023-09-13T07:42:51Z | 2023-09-12T19:41:21Z | severo |
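The change-stream-to-SSE flow sketched in the checklist of the record above could look roughly like this: a watcher iterates `collection.watch(full_document="updateLookup")` on the `dataset-hub-cache` collection (hence the replica-set requirement in the first item) and writes one SSE message per change to every connected `/sse/hub-cache` client. The payload field names (`dataset`, `http_status`) and the "200 means the viewer works" rule are assumptions for illustration, not the actual cache schema:

```python
import json
from typing import Optional


def hub_cache_event_to_sse(change: dict) -> Optional[str]:
    """Format one MongoDB change-stream event as a Server-Sent Event body."""
    doc = change.get("fullDocument")
    if doc is None:
        # e.g. a delete event without a looked-up document: nothing to emit
        return None
    payload = {
        "dataset": doc.get("dataset"),
        "hub_cache": {"viewer": doc.get("http_status") == 200},
    }
    # SSE framing: a "data:" line terminated by a blank line.
    return f"data: {json.dumps(payload)}\n\n"
```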
1,884,408,966 | feat: 🎸 reduce prod workers | null | feat: 🎸 reduce prod workers: | closed | 2023-09-06T16:58:49Z | 2023-09-06T16:59:34Z | 2023-09-06T16:58:54Z | severo |
1,884,314,016 | feat: 🎸 upgrade datasets from 2.14.4 to 2.14.5 | fixes https://github.com/huggingface/datasets-server/issues/1781 | feat: 🎸 upgrade datasets from 2.14.4 to 2.14.5: fixes https://github.com/huggingface/datasets-server/issues/1781 | closed | 2023-09-06T15:57:22Z | 2023-09-06T18:49:18Z | 2023-09-06T18:49:17Z | severo |
1,884,302,977 | update datasets to 2.14.5 | https://github.com/huggingface/datasets/releases/tag/2.14.5
I think it does not require changes in the code, right?
Also: I'm not sure if we have to refresh datasets to fix errors. Maybe:
- Do not filter out .zip extensions from no-script datasets by @albertvillanova in https://github.com/huggingface/datasets/pull/6208
Could you confirm @lhoestq @albertvillanova @polinaeterna or @mariosasko ? | update datasets to 2.14.5: https://github.com/huggingface/datasets/releases/tag/2.14.5
I think it does not require changes in the code, right?
Also: I'm not sure if we have to refresh datasets to fix errors. Maybe:
- Do not filter out .zip extensions from no-script datasets by @albertvillanova in https://github.com/huggingface/datasets/pull/6208
Could you confirm @lhoestq @albertvillanova @polinaeterna or @mariosasko ? | closed | 2023-09-06T15:50:20Z | 2023-09-07T12:09:45Z | 2023-09-06T18:49:19Z | severo |
1,884,149,721 | fix: 🐛 another fix in Helm chart | null | fix: 🐛 another fix in Helm chart: | closed | 2023-09-06T14:33:09Z | 2023-09-06T14:33:40Z | 2023-09-06T14:33:15Z | severo |
1,884,136,197 | fix: 🐛 fix the volumes | follow up of #1778 | fix: 🐛 fix the volumes: follow up of #1778 | closed | 2023-09-06T14:26:44Z | 2023-09-06T14:27:28Z | 2023-09-06T14:27:27Z | severo |
1,884,037,095 | feat: 🎸 use EFS instead of NFS for cached assets | Depends on https://github.com/huggingface/infra/pull/680 being deployed first (internal). | feat: 🎸 use EFS instead of NFS for cached assets: Depends on https://github.com/huggingface/infra/pull/680 being deployed first (internal). | closed | 2023-09-06T13:36:45Z | 2023-09-06T14:06:43Z | 2023-09-06T14:06:41Z | severo |
1,883,945,357 | feat: 🎸 remove /valid endpoint | not used anymore by the Hub, and not meant to be public.
Waiting for:
- [x] https://github.com/huggingface/datasets-server/pull/1784
- [x] https://github.com/huggingface/moon-landing/pull/7456 (internal) | feat: 🎸 remove /valid endpoint: not used anymore by the Hub, and not meant to be public.
Waiting for:
- [x] https://github.com/huggingface/datasets-server/pull/1784
- [x] https://github.com/huggingface/moon-landing/pull/7456 (internal) | closed | 2023-09-06T12:48:04Z | 2023-09-21T12:50:42Z | 2023-09-21T12:50:04Z | severo |
1,883,826,796 | docs: ✏️ add mention of the search feature | null | docs: ✏️ add mention of the search feature: | closed | 2023-09-06T11:37:12Z | 2023-09-06T11:42:11Z | 2023-09-06T11:37:41Z | severo |
1,882,996,585 | Unsafe warning for github code clean | The [Github Code Clean data set ](https://huggingface.co/datasets/codeparrot/github-code-clean) shows 47 unsafe files. May I request a re-scan with ClamAV to confirm that these are indeed unsafe? I have seen elsewhere (#882) that a re-scan resulted in files that were previously deemed unsafe were in fact OK. | Unsafe warning for github code clean: The [Github Code Clean data set ](https://huggingface.co/datasets/codeparrot/github-code-clean) shows 47 unsafe files. May I request a re-scan with ClamAV to confirm that these are indeed unsafe? I have seen elsewhere (#882) that a re-scan resulted in files that were previously deemed unsafe were in fact OK. | closed | 2023-09-06T01:38:04Z | 2023-09-06T12:33:45Z | 2023-09-06T07:35:28Z | chinghuachen |
1,882,475,140 | Squash the commits in the refs/convert/parquet branch | See https://github.com/huggingface/huggingface_hub/pull/1639#issuecomment-1706264639 | Squash the commits in the refs/convert/parquet branch: See https://github.com/huggingface/huggingface_hub/pull/1639#issuecomment-1706264639 | open | 2023-09-05T17:53:07Z | 2023-09-05T17:53:17Z | null | severo |
1,882,067,003 | Set maximum allowed arrow data for /rows to avoid OOM | Return an error if the Arrow data to load from the Parquet row groups are bigger than 300MB
This impacts /rows and the first-rows-from-parquet job
Fix https://github.com/huggingface/datasets-server/issues/1772
Fix https://github.com/huggingface/datasets-server/issues/1527 | Set maximum allowed arrow data for /rows to avoid OOM: Return an error if the Arrow data to load from the Parquet row groups are bigger than 300MB
This impacts /rows and the first-rows-from-parquet job
Fix https://github.com/huggingface/datasets-server/issues/1772
Fix https://github.com/huggingface/datasets-server/issues/1527 | closed | 2023-09-05T14:10:53Z | 2023-09-05T18:16:31Z | 2023-09-05T17:33:35Z | lhoestq |
1,881,955,630 | /rows OOMs when the arrow data are too big | it should raise an error instead, to not impact the other dataset pages | /rows OOMs when the arrow data are too big: it should raise an error instead, to not impact the other dataset pages | closed | 2023-09-05T13:12:16Z | 2023-09-05T17:33:37Z | 2023-09-05T17:33:37Z | lhoestq |
1,881,953,569 | Too big row group size for certain image datasets, causing /rows to OOM | eg https://huggingface.co/datasets/StyleMuseum/yu_ta/tree/refs%2Fconvert%2Fparquet/default/train
row group size is 1000 instead of 100, causing /rows to OOM | Too big row group size for certain image datasets, causing /rows to OOM: eg https://huggingface.co/datasets/StyleMuseum/yu_ta/tree/refs%2Fconvert%2Fparquet/default/train
row group size is 1000 instead of 100, causing /rows to OOM | closed | 2023-09-05T13:11:07Z | 2023-09-08T13:43:14Z | 2023-09-08T13:43:06Z | lhoestq |
1,881,682,945 | Less rows workers and less uvicorn workers | Following https://github.com/huggingface/datasets-server/pull/1769
since the deployment failed again because there are no nodes available, surely because /rows can't run on nodes made for dataset workers.
I also reduced the amount of uvicorn workers in case it was the reason the pods were OOMing (too many parallel calls) | Less rows workers and less uvicorn workers: Following https://github.com/huggingface/datasets-server/pull/1769
since the deployment failed again because there are no nodes available, surely because /rows can't run on nodes made for dataset workers.
I also reduced the amount of uvicorn workers in case it was the reason the pods were OOMing (too many parallel calls) | closed | 2023-09-05T10:29:40Z | 2023-09-05T11:33:44Z | 2023-09-05T10:30:19Z | lhoestq |
1,881,635,946 | Reduce other workers | Following https://github.com/huggingface/datasets-server/pull/1768 that failed to deploy because of too many workers | Reduce other workers: Following https://github.com/huggingface/datasets-server/pull/1768 that failed to deploy because of too many workers | closed | 2023-09-05T10:00:59Z | 2023-09-05T10:20:47Z | 2023-09-05T10:20:46Z | lhoestq |
1,881,617,460 | More rows worker | they were spammed too heavily and were OOMing a lot this morning | More rows worker: they were spammed too heavily and were OOMing a lot this morning | closed | 2023-09-05T09:50:22Z | 2023-09-05T10:00:37Z | 2023-09-05T09:52:11Z | lhoestq |
1,881,466,966 | Fix doc | fixes #1766
Also fix #1663 | Fix doc: fixes #1766
Also fix #1663 | closed | 2023-09-05T08:24:21Z | 2023-09-22T11:20:34Z | 2023-09-06T08:33:34Z | severo |