| column | dtype | values |
| --- | --- | --- |
| id | int64 | 959M to 2.55B |
| title | string | lengths 3 to 133 |
| body | string | lengths 1 to 65.5k |
| description | string | lengths 5 to 65.6k |
| state | string | 2 classes |
| created_at | string | lengths 20 to 20 |
| updated_at | string | lengths 20 to 20 |
| closed_at | string | lengths 20 to 20 |
| user | string | 174 classes |
1,881,422,809
Outdated documentation
The /splits endpoint does not return the number of samples anymore: https://huggingface.co/docs/datasets-server/splits
closed
2023-09-05T07:58:22Z
2023-09-06T08:33:35Z
2023-09-06T08:33:35Z
severo
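For reference, a minimal sketch of querying the documented endpoint (the endpoint and the `splits` response key are from the public docs; the dataset name is just an example):

```python
import requests

# Query the public /splits endpoint for an example dataset
response = requests.get(
    "https://datasets-server.huggingface.co/splits",
    params={"dataset": "glue"},
)
response.raise_for_status()
# As reported above, entries list dataset/config/split but no sample counts
for split in response.json()["splits"]:
    print(split)
```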
1,880,885,797
feat: 🎸 add resources
null
closed
2023-09-04T21:33:37Z
2023-09-04T21:34:10Z
2023-09-04T21:33:41Z
severo
1,880,745,138
Queue metrics every 2 minutes
null
closed
2023-09-04T18:49:01Z
2023-09-04T18:51:35Z
2023-09-04T18:51:34Z
AndreaFrancis
1,880,722,127
feat: 🎸 reduce the number of workers (/4)
null
closed
2023-09-04T18:23:20Z
2023-09-04T18:24:02Z
2023-09-04T18:23:24Z
severo
1,880,621,967
feat: 🎸 reduce the max ram
because the nodes have 32GB RAM, and we cannot fit 2x 16GB due to the small overhead Kubernetes requires alongside the pods. cc @lhoestq
closed
2023-09-04T16:44:15Z
2023-09-04T16:44:53Z
2023-09-04T16:44:23Z
severo
1,880,611,419
fix: 🐛 upgrade gitpython
null
closed
2023-09-04T16:34:18Z
2023-09-04T16:38:48Z
2023-09-04T16:38:47Z
severo
1,880,602,086
feat: 🎸 increase resources for admin service
also: yamlformat
closed
2023-09-04T16:28:03Z
2023-09-04T16:29:52Z
2023-09-04T16:29:51Z
severo
1,880,533,496
More memory for workers
8GB -> 16GB. Fixes (hopefully) https://github.com/huggingface/datasets-server/issues/1758. I don't know what kind of pods we have though; we might have to double-check it's not too much. It should be enough to perform the indexing of datasets (up to 5GB, which is the max).
closed
2023-09-04T15:44:48Z
2023-09-04T16:33:04Z
2023-09-04T16:31:45Z
lhoestq
1,880,487,270
[split-duckdb-index] OOM for big datasets
e.g. dataset=c4, config=en, split=train has a partial parquet export of 5GB. There also seem to be more than 12k cache entries with the same error code, over more than 3k datasets. This includes lots of big datasets (c4, oscar, wikipedia and all the variants by users) as well as many image and audio datasets.
closed
2023-09-04T15:13:29Z
2023-10-05T15:30:03Z
2023-10-05T15:30:02Z
lhoestq
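One possible mitigation sketch, not necessarily the fix that was shipped: cap DuckDB's memory and let it spill to disk, so a large index build degrades gracefully instead of OOM-killing the worker pod (the limit and the path below are illustrative):

```python
import duckdb

con = duckdb.connect("index.duckdb")
# Cap memory so indexing a ~5GB partial parquet export cannot exceed the pod limit
con.execute("SET memory_limit='4GB'")
# Let operators that exceed the cap spill to disk instead of aborting
con.execute("SET temp_directory='/tmp/duckdb-spill'")
```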
1,880,367,756
ci: 🎡 upgrade github action
fixes https://github.com/huggingface/datasets-server/issues/1756?
closed
2023-09-04T14:06:20Z
2023-09-04T14:11:12Z
2023-09-04T14:09:54Z
severo
1,880,349,116
The CI is broken. Due to actions/checkout release?
See https://github.com/actions/checkout/releases. They released v4 an hour ago. See errors in the CI here: https://github.com/huggingface/datasets-server/actions/runs/6074148922
closed
2023-09-04T13:57:25Z
2023-09-04T14:09:56Z
2023-09-04T14:09:55Z
severo
1,880,314,164
[refactor] extract split name from the URL
We use the same code [here](https://github.com/huggingface/datasets-server/pull/1750/files#diff-b31b94865cf356d2b241fc7f38f98bd9484d61464a991d06364bdf82c2c5ba4eR292), [here](https://github.com/huggingface/datasets-server/pull/1750/files#diff-51b62bca3846b24d6554358a34bd39d0db4db7f24e378182fde2d549e2d1268bR353) and [here](https://github.com/huggingface/datasets-server/pull/1750/files#diff-d9a2c828d7feca3b7f9e332e040ef861e842a16d18276b356461d2aa34396a8aR131). It could be factorized.
closed
2023-09-04T13:39:09Z
2024-01-24T10:01:29Z
2024-01-24T10:01:29Z
severo
1,880,286,071
Remove PARQUET_AND_INFO_BLOCKED_DATASETS
See https://github.com/huggingface/datasets-server/pull/1751#pullrequestreview-1609519374 > should we remove the other block list (only used in config-parquet-and-info) ?
closed
2023-09-04T13:24:21Z
2024-02-06T15:06:40Z
2024-02-06T15:06:40Z
severo
1,880,283,005
Delete existing cache entries for blocked datasets
Maybe in a cronjob, or in a migration every time the block list has changed? See https://github.com/huggingface/datasets-server/pull/1751#pullrequestreview-1609519374
closed
2023-09-04T13:22:46Z
2024-02-06T15:03:03Z
2024-02-06T15:03:02Z
severo
1,880,273,596
Dataset viewer fails if there is a split with no examples
See for example https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/viewer/ast/train. This is due to the dataset info not being filled for the empty splits. Should be fixed in `datasets` with https://github.com/huggingface/datasets/pull/6211
closed
2023-09-04T13:17:53Z
2023-09-25T09:09:14Z
2023-09-25T09:09:14Z
lhoestq
1,880,232,015
Add blockedDatasets variable
and block `alexandrainst/nota`, see [internal slack thread](https://huggingface.slack.com/archives/C04L6P8KNQ5/p1693548446825959). The idea is to never process anything for blocked datasets. I also chose to not store anything in the cache in this case
closed
2023-09-04T12:56:30Z
2023-09-04T13:32:07Z
2023-09-04T13:32:06Z
lhoestq
1,879,934,806
Index partial parquet
It was failing to get the parquet files because it was not using the "partial-" split directory prefix for partially exported data. Fix https://github.com/huggingface/datasets-server/issues/1749
closed
2023-09-04T10:03:19Z
2023-09-04T16:00:42Z
2023-09-04T12:54:57Z
lhoestq
1,879,827,999
Indexing fails for partial splits
because it tries to get the parquet files from e.g. the `config/train` directory instead of `config/partial-train`. Some impacted datasets: common voice 11, c4
closed
2023-09-04T09:00:31Z
2023-09-04T13:36:36Z
2023-09-04T12:54:59Z
lhoestq
1,877,133,876
Don't call the Hub datasets /tree endpoint with expand=True
This puts a lot of pressure on the Hub, and can even break it for big datasets. This is because the Hub gets the lastCommit for each file, and somehow the implementation is sort of n^2 apparently. Calling /tree with `expand=True` can happen in this cascade of events involving `hffs` (aka `huggingface_hub.hf_file_system.HfFileSystem`):
1. a dataset without script is updated
2. the config-parquet-and-info job starts
3. data files are resolved using `hffs.glob`
   3a. fsspec calls `hffs.find` and then `hffs.ls` with `detail=True` (even if `glob` has `detail=False`)
   3b. `hffs` calls /tree with pagination and `expand=True`
4. caching information like ETag are obtained using `hffs.info` in `datasets.data_files._get_single_origin_metadata`
   4a. `hffs` calls `hffs.ls` with `detail=True` on the parent directory
   4b. `hffs` calls /tree with pagination and `expand=True`

One way to stop using `expand=True` in `datasets-server` altogether is to mock `hffs.ls` to not use `expand=True`. We also need to replace the file information (currently obtained using `expand=True`) in `hffs.info` to not use `expand=True`. This is only used to get the `ETag` to check if a dataset has changed between two commits and possibly reuse generated parquet files, but it should be fine. We could use the `oid` instead. cc @Wauplin @mariosasko for visibility, though it might be too specific to be handled in `hfh` directly. cc @XciD who reported it
closed
2023-09-01T10:04:59Z
2024-02-06T15:04:04Z
2024-02-06T15:04:04Z
lhoestq
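A sketch of the two listing modes on the public Hub API, to make the cost difference concrete (the `expand` query parameter is the one named above; the exact response fields are an assumption):

```python
import requests

base = "https://huggingface.co/api/datasets/squad/tree/main"

# Cheap listing: paths, sizes, and oids only
light = requests.get(base).json()

# Expensive listing: also resolves lastCommit per file -- the call to avoid
heavy = requests.get(base, params={"expand": "True"}).json()

# The oid alone is enough to detect a content change between two commits
print(light[0]["path"], light[0]["oid"])
```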
1,875,919,722
fix: Error response in rows when cache is failed
Fix for rows issue in https://github.com/huggingface/datasets-server/issues/1661 When cache parquet failed, it returned 404 instead of 500. Now, it returns the detailed error as in /search.
fix: Error response in rows when cache is failed: Fix for rows issue in https://github.com/huggingface/datasets-server/issues/1661 When cache parquet failed, it returned 404 instead of 500. Now, it returns the detailed error as in /search.
closed
2023-08-31T17:03:44Z
2023-09-04T12:52:28Z
2023-09-04T12:52:27Z
AndreaFrancis
1,875,212,109
Ignore gitpython vuln again
same as https://github.com/huggingface/datasets-server/pull/1744
closed
2023-08-31T10:09:19Z
2023-09-04T18:10:57Z
2023-08-31T10:18:19Z
lhoestq
1,873,854,978
Improve error messages content in `split-descriptive-statistics`
I've found errors in statistics computation in the cache database and want to debug them, but the error message doesn't contain feature names, so it's harder
closed
2023-08-30T14:56:35Z
2023-09-01T12:53:10Z
2023-09-01T12:53:09Z
polinaeterna
1,873,753,399
ignore gitpython vuln
null
closed
2023-08-30T14:03:15Z
2023-08-31T10:03:41Z
2023-08-31T10:03:39Z
lhoestq
1,873,695,358
Support audio in rows and search
Using this code I can run `to_rows_list` in less than 2 sec on 100 wav files of 1.5MB (from 20 sec before). I chose to stop converting to WAV and only send the MP3 file to the user, and to parallelize the audio conversions. This should enable the viewer on audio datasets for both /rows and /search :) Though to try this feature on /rows I first enabled it on `arabic_speech_corpus` and will enable it on all datasets if it works fine. Note: this can probably be improved later, e.g. by avoiding unnecessary conversions related to https://github.com/huggingface/datasets-server/issues/1255
closed
2023-08-30T13:34:25Z
2023-09-04T16:37:33Z
2023-09-01T10:24:24Z
lhoestq
1,871,944,668
Support Search for datasets on the first 5GB of big datasets
If a dataset is in parquet format the viewer shows the full dataset but search is disabled. In this case it would be nice to support search anyway, at least on the first 5GB.
closed
2023-08-29T15:41:42Z
2023-11-07T09:52:28Z
2023-11-07T09:52:28Z
lhoestq
1,871,752,474
Add error message in admin app
when the dataset status query returns errors (eg today when whoami-v2 was down)
closed
2023-08-29T14:05:22Z
2023-08-29T14:16:45Z
2023-08-29T14:16:44Z
lhoestq
1,871,720,950
Audio feature is not displayed correctly on the first page (not in pagination)
It says `Not supported with pagination yet`, example: https://huggingface.co/datasets/open-asr-leaderboard/datasets-test-only As far as I understand, this is expected behavior from page 2 onwards, because audio was [intentionally disabled](https://github.com/huggingface/datasets-server/issues/1255) for `/rows`, but it should be displayed in the first 100 rows.
closed
2023-08-29T13:49:28Z
2023-09-08T09:27:44Z
2023-09-08T09:27:44Z
polinaeterna
1,870,640,562
Add endpoint /hub-cache
The Hub's backend will use it to get the status of all the datasets. It will replace /valid, with the benefit that it's paginated, so the response should not timeout. Also: the idea is to call this endpoint only at Hub's backend startup. Then, we plan to update the statuses with server-sent events (https://github.com/huggingface/datasets-server/issues/1718). Missing:
- [x] an error when setting the header: `h11._util.LocalProtocolError: Illegal header value b'<http://api:9080/hub-cache?cursor=64ed29b48eba1f585b71776d>;rel="next" '`
- [x] btw: it's not the URL we want (we want the public URL, not the API pod's URL)
- [x] OpenAPI
closed
2023-08-28T23:23:20Z
2023-09-05T15:46:21Z
2023-09-05T15:45:49Z
severo
1,870,085,369
Raise custom disk error in job runners with cache when `PermissionError` is raised
related to https://github.com/huggingface/datasets-server/issues/1583
closed
2023-08-28T16:32:21Z
2023-08-29T09:23:16Z
2023-08-29T09:23:14Z
polinaeterna
1,869,914,037
fix tailscale
null
closed
2023-08-28T14:47:24Z
2023-08-28T15:26:39Z
2023-08-28T14:49:34Z
glegendre01
1,869,890,323
feat: 🎸 increase the number of pods for /search
The search is now in production in the Hub, so we need to increase the number of pods
closed
2023-08-28T14:34:14Z
2023-08-28T14:54:40Z
2023-08-28T14:54:39Z
severo
1,869,694,508
Search for nested data
DuckDB does support indexing nested data, so we're all good on that side already. Therefore I simply improved the indexable column detection to check for nested data. The old code used to index all the columns as long as there was at least one non-nested string column though, so we just need to refresh the datasets with the "no indexable columns" error. Close https://github.com/huggingface/datasets-server/issues/1710
closed
2023-08-28T12:45:17Z
2023-09-04T18:11:19Z
2023-08-31T22:57:01Z
lhoestq
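A minimal sketch of the kind of nested detection described above (the helper name and the exact traversal are illustrative, not the code merged in the PR):

```python
from datasets import Features, Sequence, Value

def has_string_type(feature) -> bool:
    # Recursively check whether a (possibly nested) feature contains strings
    if isinstance(feature, Value):
        return feature.dtype == "string"
    if isinstance(feature, Sequence):
        return has_string_type(feature.feature)
    if isinstance(feature, dict):  # struct of sub-features
        return any(has_string_type(f) for f in feature.values())
    if isinstance(feature, list):  # list written as [inner_feature]
        return any(has_string_type(f) for f in feature)
    return False

features = Features(
    {"conversations": [{"from": Value("string"), "value": Value("string")}]}
)
print([name for name, feat in features.items() if has_string_type(feat)])
# ['conversations']
```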
1,869,282,369
Block open-llm-leaderboard
The 800+ datasets with 60+ configs each have been updated, which has filled up the queue to the point that the other datasets are not processed as fast as they should be. Blocking them for now, until we have a better way to handle them
closed
2023-08-28T08:36:21Z
2023-08-28T16:14:58Z
2023-08-28T16:14:57Z
lhoestq
1,867,717,638
Add API fuzzer to the tests?
Tools exist, see https://openapi.tools/
closed
2023-08-25T21:44:10Z
2023-10-04T15:04:16Z
2023-10-04T15:04:16Z
severo
1,867,702,913
feat: 🎸 create step dataset-hub-cache
A new step, specific to the Hub (i.e. it will not be backward-compatible), to help the Hub have a cache of the information it needs for each dataset. Note that it's the first step and endpoint that is specific to the Hub. I think we should have more of them (we can use the `-hub` prefix to make it clear in the code). In particular, instead of having N calls from the backend to create the dataset viewer page, we should have only one, well formatted:
- We reduce the number of calls
- We prepare the data beforehand, and it's cached

This PR will be followed by another one that creates the /hub-cache API endpoint. It will replace `/valid`, and will return a paginated list of `dataset-hub-cache` responses, ordered by `updated_at`. We will try to have an SLO of the same order as the other endpoints (85% under 250ms), even if it means having small pages. Anyway, this endpoint will only be used to warm the Hub cache, and then we will switch to the ([still to be implemented](https://github.com/huggingface/datasets-server/issues/1718)) Server-Sent Events endpoint /hub-cache-sse that will send the content of `dataset-hub-cache` on every update.
closed
2023-08-25T21:24:31Z
2023-08-28T18:48:03Z
2023-08-28T16:16:39Z
severo
1,867,526,809
add `truncated` field to /first-rows
On the Hub's side, we have to guess if the result has been truncated. For example, if the result has 35 rows: maybe it has not been truncated because the split only had 35 rows, or maybe it has been truncated because the cells contain heavy data -> *we cannot tell*. It would be more explicit to return `truncated: boolean` in the response.
closed
2023-08-25T18:39:57Z
2023-09-21T15:22:43Z
2023-09-21T15:22:43Z
severo
1,867,450,765
fix: Change score alias name in FST query
Fix for https://github.com/huggingface/datasets-server/issues/1729
closed
2023-08-25T17:38:42Z
2023-08-25T19:38:34Z
2023-08-25T19:38:33Z
AndreaFrancis
1,867,289,353
/search num_rows_total field is incoherent
In the following example, num_rows_total=1 (i.e. the total number of results for that search) while the `rows` field is an array of 100 rows: https://datasets-server.huggingface.co/search?dataset=loubnabnl/gpt4-1k-annotations&config=default&split=train&query=pokemon&offset=0&limit=100
closed
2023-08-25T15:55:46Z
2023-08-25T20:28:30Z
2023-08-25T20:28:30Z
severo
1,867,180,294
Add TTL for unicity_id locks
Added the `ttl` parameter to `lock()`. (The only supported value is 600 though, which is the value of the TTL index in mongo - but this is extendable later if needed.) Close https://github.com/huggingface/datasets-server/issues/1727
closed
2023-08-25T14:41:15Z
2023-08-27T16:11:22Z
2023-08-27T16:11:21Z
lhoestq
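The TTL mechanism itself can be sketched with a plain MongoDB TTL index; a minimal sketch, with hypothetical database, collection, and field names (not the ones used in libcommon):

```python
from datetime import datetime, timezone

import pymongo

client = pymongo.MongoClient()
locks = client["queue"]["locks"]  # hypothetical names

# MongoDB deletes a lock document ~600s after created_at, so a crashed
# worker cannot hold a unicity_id lock forever
locks.create_index("created_at", expireAfterSeconds=600)
locks.create_index("unicity_id", unique=True)

def try_acquire(unicity_id: str, owner: str) -> bool:
    try:
        locks.insert_one(
            {
                "unicity_id": unicity_id,
                "owner": owner,
                "created_at": datetime.now(timezone.utc),
            }
        )
        return True
    except pymongo.errors.DuplicateKeyError:
        return False  # the lock is already held
```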
1,866,947,915
Locks sometimes block all the workers
It can happen that the job from `Queue().get_next_waiting_job()` is wrongly locked, which can make the workers fail to start a new job. A way to fix this is to avoid wrong locks, or simply to ignore locked jobs in `Queue().get_next_waiting_job()`.
closed
2023-08-25T12:15:10Z
2023-08-27T16:11:22Z
2023-08-27T16:11:22Z
lhoestq
1,865,782,924
Cached assets to s3
The first part of https://github.com/huggingface/datasets-server/issues/1406. Migration for cached-assets, enabled only for a list of datasets initially (I added the "asoria/image" dataset for the first testing; later, we can remove this logic and apply it to all datasets). Note that the new logic validates if the file exists in the bucket and, if not, uploads it. This might increase latency, but we will see how it works once it is deployed on prod; for now, I just added a couple of tests to ensure behavior. All /rows and /search resources will be stored in the `/cached-assets` subfolder in the bucket, and the infra team will help us configure a TTL policy at the folder level to delete the files created more than a day before. Notice that I decided to apply the same approach as Audio for Images because:
- We are doing mostly an IO-bound task
- According to best practices, it is better to use multithreading when working with IO-bound tasks
- This doc helped me understand that we are taking the best approach initially for parallelizing requests to S3: https://superfastpython.com/threadpool-vs-pool-in-python/
> You should use the ThreadPool for IO-bound tasks in Python in general.
> - Reading or writing a file from the hard drive.
> - Downloading or uploading a file.

Also, I compared /rows + s3 with thread_map vs multiprocessing.Pool and had the following results (might be affected by my internet connection, but at least it can give us an idea for comparison):

**For an audio dataset https://huggingface.co/datasets/asoria/audio-sample:**
Multithreading (thread_map)
- First time (all files are uploaded to the bucket) = 7s
- Second time (only validation for existing files is done) = 4s
Multiprocessing (multiprocessing.Pool)
- First time (all files are uploaded to the bucket) = 14s
- Second time (only validation for existing files is done) = 5s

**For an image dataset https://huggingface.co/datasets/asoria/image:**
Multithreading (thread_map)
- First time (all files are uploaded to the bucket) = 6s
- Second time (only validation for existing files is done) = 3s
Multiprocessing (multiprocessing.Pool)
- First time (all files are uploaded to the bucket) = 15s
- Second time (only validation for existing files is done) = 7s
closed
2023-08-24T19:47:56Z
2023-09-27T12:39:46Z
2023-09-27T12:39:45Z
AndreaFrancis
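A minimal sketch of the check-then-upload pattern with threads (the bucket name and file list are made up; `thread_map` is the tqdm helper mentioned in the benchmarks above):

```python
import boto3
import botocore
from tqdm.contrib.concurrent import thread_map

s3 = boto3.client("s3")
BUCKET = "my-cached-assets-bucket"  # hypothetical bucket name

def upload_if_missing(args):
    local_path, key = args
    try:
        # Cheap existence check: HEAD instead of re-uploading
        s3.head_object(Bucket=BUCKET, Key=key)
        return "skipped"
    except botocore.exceptions.ClientError:
        s3.upload_file(local_path, BUCKET, key)
        return "uploaded"

# IO-bound work, hence threads rather than processes
results = thread_map(
    upload_if_missing,
    [("assets/row0.jpg", "cached-assets/ds/row0.jpg")],
)
```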
1,865,396,120
Use features in search
The search endpoint was using the feature types from the arrow table returned by duckdb, which doesn't contain any metadata about the Image type. So I added a `features` field to the `split-duckdb-index` job to store the feature types that the search endpoint can use to correctly load the image data. I added a migration job to fill this field with `None` and I will simply update the `split-duckdb-index` job for image datasets. Fix https://github.com/huggingface/datasets-server/issues/1713
closed
2023-08-24T15:23:13Z
2023-09-07T11:06:21Z
2023-08-25T10:20:04Z
lhoestq
1,864,797,501
Block KakologArchives/KakologArchives
Has tons of data files and is updated every day. 12k commits already in https://huggingface.co/datasets/KakologArchives/KakologArchives/tree/refs%2Fconvert%2Fparquet Let's block it until we have a better way of handling big datasets with frequent updates
closed
2023-08-24T09:48:23Z
2023-08-24T15:16:45Z
2023-08-24T15:16:44Z
lhoestq
1,864,044,303
fix: 🐛 expose X-Error-Code and X-Revision headers to browser
Fixes #1722
closed
2023-08-23T21:32:54Z
2023-08-23T21:41:58Z
2023-08-23T21:41:57Z
severo
1,864,027,896
Set the `Access-Control-Expose-Headers` header to allow access to X-Error-Code in the browser
Since the dataset viewer on the Hub lets us navigate in the pages of rows, and search, all in the browser, we need to let the browser code access the X-Error-Code header, to be able to handle the errors adequately. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Expose-Headers
closed
2023-08-23T21:20:09Z
2023-08-23T21:41:58Z
2023-08-23T21:41:58Z
severo
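A minimal sketch of how this looks with Starlette's CORS middleware (one possible framing, not necessarily the exact code of the fix):

```python
from starlette.applications import Starlette
from starlette.middleware.cors import CORSMiddleware

app = Starlette()
# Browsers only let JS read a small safelist of response headers;
# anything else must be listed in Access-Control-Expose-Headers
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    expose_headers=["X-Error-Code", "X-Revision"],
)
```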
1,863,931,685
Increase resources and change queue metrics time
null
closed
2023-08-23T19:59:03Z
2023-08-23T20:00:13Z
2023-08-23T20:00:12Z
AndreaFrancis
1,863,714,834
[docs] ClickHouse integration
A first draft for querying Hub datasets with ClickHouse :)
- [x] Link to the blog post once it is published
closed
2023-08-23T17:09:54Z
2023-09-24T20:13:27Z
2023-09-05T15:51:37Z
stevhliu
1,863,678,539
Download parquet files with huggingface_hub instead of duckdb in `split-descriptive-statistics`
will fix https://github.com/huggingface/datasets-server/issues/1712#issuecomment-1690029285
closed
2023-08-23T16:42:38Z
2023-08-25T14:45:28Z
2023-08-25T14:45:27Z
polinaeterna
1,862,142,672
Implement Server-sent events to update the Hub cache
The Hub needs to know which datasets have a viewer, or only a preview. Currently, we publish the /valid endpoint, which returns a list of all the dataset names that have the search capability, the viewer, or just the preview. It has two drawbacks:
1. it gives information about the gated datasets
2. it does not scale (the queries to the database take too much time, > 10s, even > 20s when the database is under load)

The solution we will implement is to send Server-sent events to the Hub.
closed
2023-08-22T20:18:25Z
2023-10-19T11:48:07Z
2023-10-19T11:48:06Z
severo
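A minimal SSE sketch with Starlette (the payload shape is invented; only the `text/event-stream` framing is the point):

```python
import asyncio

from starlette.applications import Starlette
from starlette.responses import StreamingResponse
from starlette.routing import Route

async def hub_cache_sse(request):
    async def event_stream():
        while True:
            # Hypothetical payload: one event per dataset status change
            yield 'data: {"dataset": "user/dataset", "viewer": true}\n\n'
            await asyncio.sleep(1)

    return StreamingResponse(event_stream(), media_type="text/event-stream")

app = Starlette(routes=[Route("/hub-cache-sse", hub_cache_sse)])
```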
1,862,093,724
Download parquet in split-duckdb-index
Fixes https://github.com/huggingface/datasets-server/issues/1686 Using hf_download instead of loading parquet files to duckdb directly.
closed
2023-08-22T19:44:53Z
2023-08-23T13:52:41Z
2023-08-23T13:52:40Z
AndreaFrancis
1,861,723,307
Start job only if waiting
related to https://github.com/huggingface/datasets-server/issues/1467#issuecomment-1687104152
closed
2023-08-22T15:36:37Z
2023-08-22T19:50:54Z
2023-08-22T19:50:53Z
lhoestq
1,859,664,571
Reduce resources
null
closed
2023-08-21T15:45:37Z
2023-08-21T15:47:05Z
2023-08-21T15:47:04Z
AndreaFrancis
1,859,099,518
Update admin app requirements.txt
was already updated in pyproject.toml
closed
2023-08-21T10:43:06Z
2023-08-21T10:43:36Z
2023-08-21T10:43:35Z
lhoestq
1,859,080,092
Search doesn't always use Image type
It seems the feature type is not loaded correctly, resulting in a binary type that is ignored in the viewer, e.g. https://datasets-server.huggingface.co/search?dataset=lambdalabs/pokemon-blip-captions&config=lambdalabs--pokemon-blip-captions&split=train&query=red&offset=0&limit=100

```
"features": [
  {
    "feature_idx": 0,
    "name": "image",
    "type": {
      "bytes": { "dtype": "binary", "_type": "Value" },
      "path": { "dtype": "string", "_type": "Value" }
    }
  },
  {
    "feature_idx": 1,
    "name": "text",
    "type": { "dtype": "string", "_type": "Value" }
  }
],
"rows": [
  {
    "row_idx": 0,
    "row": {
      "image": null,
      "text": "a drawing of a green pokemon with red eyes"
    },
    "truncated_cells": []
  },
  {
    "row_idx": 1,
    "row": {
      "image": null,
      "text": "a green and yellow toy with a red nose"
    },
    "truncated_cells": []
  },
```

cc @AndreaFrancis

another example: https://datasets-server.huggingface.co/search?dataset=jmhessel/newyorker_caption_contest&config=explanation&split=train&query=hospital&offset=0&limit=100

another example: https://datasets-server.huggingface.co/search?dataset=ademax/ocr_scan_vi_01&config=default&split=train&query=pho&offset=0&limit=100
closed
2023-08-21T10:30:33Z
2023-08-25T15:37:36Z
2023-08-25T10:20:05Z
lhoestq
1,858,969,064
Missing auth for `split-descriptive-statistics`
```
INFO: 2023-08-21 09:23:35,871 - root - [split-descriptive-statistics] compute JobManager(job_id=64dd8b2e4833e19c97d7c0db dataset=mozilla-foundation/common_voice_9_0 job_info={'job_id': '64dd8b2e4833e19c97d7c0db', 'type': 'split-descriptiv
INFO: 2023-08-21 09:23:35,879 - root - Compute descriptive statistics for dataset='mozilla-foundation/common_voice_9_0', config='pt', split='invalidated'
INFO: 2023-08-21 09:23:36,440 - root - Downloading remote data to a local parquet file /storage/stats-cache/74577196210609-split-descriptive-statistics-mozilla-foundation-c-1d56685e/dataset.parquet.
ERROR: 2023-08-21 09:23:37,038 - root - HTTP Error: HTTP GET error on 'https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0/resolve/refs%2Fconvert%2Fparquet/pt/invalidated/0000.parquet' (HTTP 401)
Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_manager.py", line 160, in process
    job_result = self.job_runner.compute()
  File "/src/services/worker/src/worker/job_runners/split/descriptive_statistics.py", line 404, in compute
    compute_descriptive_statistics_response(
  File "/src/services/worker/src/worker/job_runners/split/descriptive_statistics.py", line 323, in compute_descriptive_statistics_response
    con.sql(
duckdb.HTTPException: HTTP Error: HTTP GET error on 'https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0/resolve/refs%2Fconvert%2Fparquet/pt/invalidated/0000.parquet' (HTTP 401)
```
closed
2023-08-21T09:25:14Z
2023-08-25T14:45:28Z
2023-08-25T14:45:28Z
lhoestq
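The fix direction (downloading the parquet files with huggingface_hub instead of letting DuckDB issue an anonymous HTTP GET, as in the PRs above) can be sketched with `hf_hub_download`; the token value is a placeholder:

```python
from huggingface_hub import hf_hub_download

# Authenticated download of the shard that DuckDB's anonymous GET failed on
local_parquet = hf_hub_download(
    repo_id="mozilla-foundation/common_voice_9_0",
    repo_type="dataset",
    revision="refs/convert/parquet",
    filename="pt/invalidated/0000.parquet",
    token="hf_xxx",  # placeholder: a token with read access
)
# DuckDB can then read the local file, with no remote HTTP involved
```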
1,857,300,627
New attempt jwt array
This restores the list of public keys for JWT. But now, we test at startup that the format of the keys is valid.
closed
2023-08-18T21:16:25Z
2023-08-18T21:23:11Z
2023-08-18T21:23:10Z
severo
1,856,954,754
Enable Duckdb index on nested texts
E.g. for dialog datasets with features

```python
Features({
    "conversations": [{"from": Value("string"), "value": Value("string")}]
})
```

like https://huggingface.co/datasets/LDJnr/Puffin
closed
2023-08-18T16:01:39Z
2023-08-31T22:57:02Z
2023-08-31T22:57:02Z
lhoestq
1,856,950,887
Revert "Create jwt array again (#1708)"
This reverts commit 45cd1298b62f8f923eb6bb7763ef8824a397e242.
Revert "Create jwt array again (#1708)": This reverts commit 45cd1298b62f8f923eb6bb7763ef8824a397e242.
closed
2023-08-18T15:58:33Z
2023-08-18T15:59:08Z
2023-08-18T15:58:38Z
severo
1,856,874,379
Create jwt array again
A Helm chart had a bad indentation
closed
2023-08-18T15:03:15Z
2023-08-18T15:03:54Z
2023-08-18T15:03:54Z
severo
1,856,854,450
Revert both
null
closed
2023-08-18T14:50:29Z
2023-08-18T14:51:15Z
2023-08-18T14:50:34Z
severo
1,856,850,156
Revert "Add unique compound index to cache metric (#1703)"
This reverts commit ab99a259cbf9f961a5286a583239db8d50677e8e.
Revert "Add unique compound index to cache metric (#1703)": This reverts commit ab99a259cbf9f961a5286a583239db8d50677e8e.
closed
2023-08-18T14:48:19Z
2023-08-18T14:48:26Z
2023-08-18T14:48:25Z
severo
1,855,734,092
Redirect the API root to the docs
i.e. https://datasets-server.huggingface.co/ => https://huggingface.co/docs/datasets-server
open
2023-08-17T21:27:51Z
2024-02-06T15:02:04Z
null
severo
1,855,726,623
Use multiple keys for jwt decoding
See https://github.com/huggingface/moon-landing/pull/7202 (internal) Note that I created the secrets in the infra.
closed
2023-08-17T21:20:06Z
2023-08-18T13:58:17Z
2023-08-18T13:58:16Z
severo
1,855,696,306
Add unique compound index to cache metric
null
closed
2023-08-17T20:53:13Z
2023-08-17T21:14:00Z
2023-08-17T21:13:58Z
AndreaFrancis
1,855,581,642
Temporarily delete cache metric index
It will be added again in the next deploy as:

```
{
    "fields": ["kind", "http_status", "error_code"],
    "unique": True
}
```

(In another PR)
closed
2023-08-17T19:30:04Z
2023-08-17T20:32:20Z
2023-08-17T20:32:19Z
AndreaFrancis
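For reference, the same index spec expressed directly with pymongo (the database and collection names here are hypothetical):

```python
import pymongo

client = pymongo.MongoClient()
metrics = client["maintenance"]["cacheTotalMetric"]  # hypothetical names

# One metric document per (kind, http_status, error_code) triple,
# enforced by the database itself
metrics.create_index(
    [
        ("kind", pymongo.ASCENDING),
        ("http_status", pymongo.ASCENDING),
        ("error_code", pymongo.ASCENDING),
    ],
    unique=True,
)
```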
1,855,558,409
Set collect cache metrics as default schedule
null
closed
2023-08-17T19:10:54Z
2023-08-17T19:15:47Z
2023-08-17T19:15:46Z
AndreaFrancis
1,855,400,835
Remove unique index in cacheTotalMetric
null
closed
2023-08-17T17:18:46Z
2023-08-17T17:20:21Z
2023-08-17T17:20:20Z
AndreaFrancis
1,855,335,729
Fix start job lock owner
Related to https://github.com/huggingface/datasets-server/pull/1420 fixes (hopefully) https://github.com/huggingface/datasets-server/issues/1467
closed
2023-08-17T16:34:01Z
2023-08-17T18:05:11Z
2023-08-17T18:05:10Z
lhoestq
1,855,309,223
Rollback queue incremental metrics
Currently, queue metrics are getting wrong, sometimes negative, values. It could be related to an issue with jobs being processed more than once by different workers: https://github.com/huggingface/datasets-server/issues/1467. Rolling back incremental queue metrics until the job processing issues have been solved. ![Screenshot from 2023-08-17 12-14-22](https://github.com/huggingface/datasets-server/assets/5564745/dc0a0722-eeb3-46eb-927e-7155dc53bae3) Note that I changed the cron schedule back to two minutes (as it was before https://github.com/huggingface/datasets-server/pull/1684)
closed
2023-08-17T16:16:52Z
2023-08-17T16:31:51Z
2023-08-17T16:31:51Z
AndreaFrancis
1,855,162,947
Add unique constraint to CacheTotalMetric
null
Add unique constraint to CacheTotalMetric:
closed
2023-08-17T14:50:04Z
2023-08-17T15:10:34Z
2023-08-17T15:10:33Z
AndreaFrancis
1,853,962,474
delete obsolete cache records
When running dataset-config-names force-refresh for datasets with only one config for https://github.com/huggingface/datasets-server/issues/1550, I found that for dataset `triple-t/dummy`, the previous config remains in the db `triple-t/dummy.` All cache records related to `triple--t/dummy` should have been removed. See ![Screenshot from 2023-08-16 18-05-54](https://github.com/huggingface/datasets-server/assets/5564745/259711df-2f53-4cd1-b6e9-6bc53b4f1a00) We could do it after the previous job has finished.
closed
2023-08-16T22:03:29Z
2023-09-15T13:47:57Z
2023-09-15T13:47:18Z
AndreaFrancis
1,853,860,310
Lock queue metrics while update
null
closed
2023-08-16T20:27:39Z
2023-08-17T13:56:35Z
2023-08-17T13:56:34Z
AndreaFrancis
1,853,635,877
Load a parquet export with `pyarrow.parquet.ParquetDataset`
`read()` fails because it tries to load the `index.duckdb` file as a parquet file

```python
from huggingface_hub import HfFileSystem
import pyarrow.parquet as pq

ds = pq.ParquetDataset("datasets/squad@~parquet", filesystem=HfFileSystem()).read()
```

raises

```
ArrowInvalid: Could not open Parquet input source 'datasets/squad@~parquet/plain_text/train/index.duckdb': Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
```

Shall we rename the `index.duckdb` file? I think we can try renaming it to `_index.duckdb` or `.index.duckdb`

-----------------------

`read_pandas()` also fails (this is what pandas uses when calling `pd.read_parquet`)

```python
from huggingface_hub import HfFileSystem
import pyarrow.parquet as pq

df = pq.ParquetDataset("datasets/squad@~parquet", filesystem=HfFileSystem()).read_pandas()
```

raises

```
EntryNotFoundError: 404 Client Error. (Request ID: Root=1-64dd05ce-3f58691a7b2e8fd67c76425b) Entry Not Found for url: https://huggingface.co/api/datasets/squad/tree/~parquet/_common_metadata?expand=True. Tree Entry not found: _common_metadata
```
closed
2023-08-16T17:21:54Z
2024-06-19T14:21:23Z
2024-06-19T14:21:23Z
lhoestq
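Until the file is renamed, a possible workaround sketch is to glob only the parquet shards so `index.duckdb` is never touched:

```python
import pyarrow.parquet as pq
from huggingface_hub import HfFileSystem

fs = HfFileSystem()
# Select the parquet shards explicitly instead of the whole directory
files = fs.glob("datasets/squad@~parquet/plain_text/train/*.parquet")
table = pq.ParquetDataset(files, filesystem=fs).read()
```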
1,853,558,203
feat: 🎸 allow passing JWT on authorization header + raise error if invalid
The authorization header must use the "jwt:" prefix, i.e.: `authorization: Bearer jwt:....token....` Fixes #1690 and #934. Tasks:
- [x] allow jwt on authorization header
- [x] return an error if the JWT is invalid
- [x] add docs + openapi
- [x] <strike>add e2e tests</strike> as we run the e2e against the CI Hub, to be able to generate valid JWTs, we would have to provide the CI Hub's private key. I think we will leave it as is for now
closed
2023-08-16T16:22:22Z
2023-08-17T16:09:23Z
2023-08-17T16:08:48Z
severo
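A sketch of the header convention with PyJWT (the algorithm name is an assumption; the real validation lives in the API service):

```python
from typing import Optional

import jwt  # PyJWT

def token_from_authorization(header: str) -> Optional[str]:
    # The "jwt:" prefix distinguishes a Hub JWT from a plain user token
    if header.startswith("Bearer jwt:"):
        return header[len("Bearer jwt:"):]
    return None

def validate(header: str, public_key: str) -> dict:
    token = token_from_authorization(header)
    if token is None:
        raise ValueError("no JWT in the authorization header")
    # Raises ExpiredSignatureError, InvalidSignatureError, etc., which the
    # API can map to distinct X-Error-Code values instead of ignoring them
    return jwt.decode(token, public_key, algorithms=["EdDSA"])  # assumed algorithm
```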
1,853,502,496
Increase config-parquet-and-info version
following #1685, this will update all the datasets and require lots of time and workers :)
closed
2023-08-16T15:47:03Z
2023-08-16T15:55:53Z
2023-08-16T15:55:52Z
lhoestq
1,853,485,504
Parquet renames docs
Following https://github.com/huggingface/datasets-server/pull/1685
closed
2023-08-16T15:36:27Z
2023-08-17T18:42:02Z
2023-08-17T18:41:31Z
lhoestq
1,853,463,919
Return an error when the JWT is not valid
Currently, we silently ignore errors in the JWT and try other authentication mechanisms. Instead, we should return an error when the JWT is not valid. It will help trigger a JWT renewal, in particular. We should give the reason as the error_code (passed in the X-Error-Code header) to be able to discriminate between a key error, an algorithm error, or an expiration error, among others
closed
2023-08-16T15:24:03Z
2023-08-17T16:08:49Z
2023-08-17T16:08:49Z
severo
1,853,370,773
Handle breaking change in google dependency?
See https://huggingface.co/datasets/bigscience/P3/discussions/6#64dca122e3e44e8000c45616 Should we downgrade the dependency, or fix the datasets?
closed
2023-08-16T14:31:28Z
2024-02-06T14:59:59Z
2024-02-06T14:59:59Z
severo
1,852,252,236
feat: 🎸 add num_rows_per_page in /rows and /search responses
also renames num_total_rows to num_rows_total. BREAKING CHANGE: 🧨 the field num_total_rows in /rows and /search has been renamed num_rows_total. Fixes #1687
closed
2023-08-15T22:52:17Z
2023-08-16T15:21:37Z
2023-08-16T15:20:57Z
severo
1,852,063,494
The /rows and /search responses should return the maximum number of rows per page
To be self-sufficient, the response of /rows and /search should return the (maximum) number of rows per page. For now, we hardcode 100 in the client. It could be `max_rows_per_page` (and rename `num_total_rows` to `total_rows`?)
closed
2023-08-15T20:15:20Z
2023-08-16T15:20:58Z
2023-08-16T15:20:58Z
severo
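A client-side sketch of what the new fields enable (the dataset/config/split values are just an example):

```python
import requests

base = "https://datasets-server.huggingface.co/rows"
params = {"dataset": "glue", "config": "cola", "split": "train", "offset": 0, "limit": 100}

while True:
    page = requests.get(base, params=params).json()
    for row in page["rows"]:
        ...  # consume row["row"]
    # No hardcoded 100: the page size comes from the response itself
    params["offset"] += page["num_rows_per_page"]
    if params["offset"] >= page["num_rows_total"]:
        break
```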
1,851,641,126
Enable duckdb index on gated datasets
> Currently, duckdb index is not supported for gated/private datasets. I opened a question in duckdb foundations but didn't receive a response yet; I think I will open an issue in the repo https://github.com/duckdb/foundation-discussions/discussions/16

from [Slack](https://huggingface.slack.com/archives/C04L6P8KNQ5/p1692050576450149?thread_ts=1692049509.952679&cid=C04L6P8KNQ5) (internal)
closed
2023-08-15T15:21:46Z
2023-08-23T13:52:42Z
2023-08-23T13:52:42Z
severo
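One possible workaround, sketched under assumptions (the repo id, file name, token and indexed column are all illustrative): download the parquet shard with an authenticated Hub client rather than letting duckdb fetch it anonymously over HTTP, then build the full-text-search index locally.

```python
import duckdb
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="user/gated-dataset",           # hypothetical gated dataset
    filename="default/train/0000.parquet",  # hypothetical parquet shard
    repo_type="dataset",
    token="hf_xxx",                         # token with access to the gated repo
)
con = duckdb.connect("index.duckdb")
con.execute("INSTALL 'fts'; LOAD 'fts';")
con.execute(
    # Add a stable row id, required by the FTS extension as document key.
    f"CREATE TABLE data AS SELECT row_number() OVER () - 1 AS __id, * "
    f"FROM read_parquet('{local_path}');"
)
con.execute("PRAGMA create_fts_index('data', '__id', 'text');")  # 'text' column assumed
```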
1,851,143,583
Rename parquet files
from `{config}/{dataset_name}-{split}{sharded_suffix}.parquet` to `{config}/{split}/{shard_idx:04d}.parquet`
Rename parquet files: from `{config}/{dataset_name}-{split}{sharded_suffix}.parquet` to `{config}/{split}/{shard_idx:04d}.parquet`
closed
2023-08-15T09:22:54Z
2023-08-18T14:23:10Z
2023-08-16T15:35:22Z
lhoestq
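A tiny sketch of the new scheme (the helper name is ours, not the repository's): shards are now numbered per split, under the config directory.

```python
def parquet_file_path(config: str, split: str, shard_idx: int) -> str:
    # New layout: one directory per split, zero-padded shard numbers.
    return f"{config}/{split}/{shard_idx:04d}.parquet"


assert parquet_file_path("default", "train", 0) == "default/train/0000.parquet"
assert parquet_file_path("default", "train", 12) == "default/train/0012.parquet"
```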
1,850,677,026
Incremental queue metrics
null
Incremental queue metrics:
closed
2023-08-14T23:01:42Z
2023-08-15T20:14:20Z
2023-08-15T20:14:18Z
AndreaFrancis
1,850,552,533
Set Access-Control-Allow-Origin to huggingface.co when a cookie is used for authentication
See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin Currently, we always set `Access-Control-Allow-Origin: *`. It's wrong. When a request passes a cookie and the user is authorized to get access thanks to that cookie, we should return `Access-Control-Allow-Origin: huggingface.co`. Otherwise, the browser that receives the response will generate a network error (https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS/Errors/CORSNotSupportingCredentials). Fixing this is required to make calls from the browser, for the search feature.
Set Access-Control-Allow-Origin to huggingface.co when a cookie is used for authentication: See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin Currently, we always set `Access-Control-Allow-Origin: *`. It's wrong. When a request passes a cookie and the user is authorized to get access thanks to that cookie, we should return `Access-Control-Allow-Origin: huggingface.co`. Otherwise, the browser that receives the response will generate a network error (https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS/Errors/CORSNotSupportingCredentials). Fixing this is required to make calls from the browser, for the search feature.
closed
2023-08-14T21:16:13Z
2023-09-15T07:51:46Z
2023-09-15T07:51:45Z
severo
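A minimal Starlette sketch of the intended behavior, assuming a cookie named `token` and a `/rows` endpoint (both illustrative): echo the specific origin and allow credentials for cookie-authenticated requests, keep `*` for anonymous ones.

```python
from starlette.applications import Starlette
from starlette.requests import Request
from starlette.responses import JSONResponse
from starlette.routing import Route

HUB_ORIGIN = "https://huggingface.co"  # assumed allowed origin


async def rows(request: Request) -> JSONResponse:
    response = JSONResponse({"rows": []})
    if "token" in request.cookies:  # hypothetical session cookie name
        # Credentialed requests: browsers reject "*", so echo the origin.
        response.headers["Access-Control-Allow-Origin"] = HUB_ORIGIN
        response.headers["Access-Control-Allow-Credentials"] = "true"
        response.headers["Vary"] = "Origin"
    else:
        response.headers["Access-Control-Allow-Origin"] = "*"
    return response


app = Starlette(routes=[Route("/rows", rows)])
```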
1,850,304,398
feat: 🎸 be more specific in OpenAPI type
the failed configs format is a CustomError
feat: 🎸 be more specific in OpenAPI type: the failed configs format is a CustomError
closed
2023-08-14T18:16:33Z
2023-08-14T18:51:25Z
2023-08-14T18:51:00Z
severo
1,850,106,500
fix: 🐛 fix the optional types in OpenAPI
I made these changes by looking at how the TypeScript types are created with https://github.com/oazapfts/oazapfts (which we use on the Hub).
fix: 🐛 fix the optional types in OpenAPI: I made these changes by looking at how the TypeScript types are created with https://github.com/oazapfts/oazapfts (which we use on the Hub).
closed
2023-08-14T16:13:57Z
2023-08-14T16:56:49Z
2023-08-14T16:56:18Z
severo
1,850,097,086
Fix parquet filename regex
for https://huggingface.co/datasets/GalaktischeGurke/full_dataset_1509_lines_invoice_contract_mail_GPT3.5_test/discussions/1#64da13ff3a7ab21ea7c45e63
Fix parquet filename regex: for https://huggingface.co/datasets/GalaktischeGurke/full_dataset_1509_lines_invoice_contract_mail_GPT3.5_test/discussions/1#64da13ff3a7ab21ea7c45e63
closed
2023-08-14T16:07:27Z
2023-08-14T22:31:47Z
2023-08-14T22:31:47Z
lhoestq
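The exact pattern in the repository differs, but here is a hedged sketch of the kind of regex in play, assuming the `dataset-split[-shard-of-total].parquet` layout of the time: the dataset part must be matched non-greedily so names containing dots or digits (like the one linked above) don't break parsing.

```python
import re

PARQUET_FILENAME = re.compile(
    r"^(?P<dataset>.+?)-(?P<split>[\w.]+?)"
    r"(?:-(?P<shard>\d{5})-of-(?P<total>\d{5}))?\.parquet$"
)

# A dataset name containing dots and digits still parses correctly.
m = PARQUET_FILENAME.match(
    "full_dataset_1509_lines_invoice_contract_mail_GPT3.5_test-train.parquet"
)
assert m is not None and m.group("split") == "train"

# Sharded files parse too: the split cannot eat the shard suffix.
m2 = PARQUET_FILENAME.match("c4-train-00000-of-01024.parquet")
assert m2 is not None and m2.group("shard") == "00000"
```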
1,849,829,703
Some refactors
fixes review comments from https://github.com/huggingface/datasets-server/pull/1674. Thanks @AndreaFrancis!
Some refactors: fixes review comments from https://github.com/huggingface/datasets-server/pull/1674. Thanks @AndreaFrancis!
closed
2023-08-14T13:57:35Z
2023-08-14T16:56:34Z
2023-08-14T16:56:32Z
severo
1,849,658,133
Fix disk metrics
null
Fix disk metrics:
closed
2023-08-14T12:13:11Z
2023-08-14T13:56:17Z
2023-08-14T13:56:16Z
AndreaFrancis
1,847,399,369
fix: 🐛 fix vulnerability in gitpython
null
fix: 🐛 fix vulnerability in gitpython:
closed
2023-08-11T20:37:26Z
2023-08-11T20:42:27Z
2023-08-11T20:42:26Z
severo
1,847,397,826
build(deps-dev): bump gitpython from 3.1.31 to 3.1.32 in /libs/libcommon
Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.31 to 3.1.32. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/gitpython-developers/GitPython/releases">gitpython's releases</a>.</em></p> <blockquote> <h2>v3.1.32 - with another security update</h2> <h2>What's Changed</h2> <ul> <li>Bump cygwin/cygwin-install-action from 3 to 4 by <a href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1572">gitpython-developers/GitPython#1572</a></li> <li>Fix up the commit trailers functionality by <a href="https://github.com/itsluketwist"><code>@​itsluketwist</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1576">gitpython-developers/GitPython#1576</a></li> <li>Name top-level exceptions as private variables by <a href="https://github.com/Hawk777"><code>@​Hawk777</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1590">gitpython-developers/GitPython#1590</a></li> <li>fix pypi long description by <a href="https://github.com/eUgEntOptIc44"><code>@​eUgEntOptIc44</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1603">gitpython-developers/GitPython#1603</a></li> <li>Don't rely on <strong>del</strong> by <a href="https://github.com/r-darwish"><code>@​r-darwish</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1606">gitpython-developers/GitPython#1606</a></li> <li>Block insecure non-multi options in clone/clone_from by <a href="https://github.com/Beuc"><code>@​Beuc</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1609">gitpython-developers/GitPython#1609</a></li> </ul> <h2>New Contributors</h2> <ul> <li><a href="https://github.com/Hawk777"><code>@​Hawk777</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1590">gitpython-developers/GitPython#1590</a></li> <li><a href="https://github.com/eUgEntOptIc44"><code>@​eUgEntOptIc44</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1603">gitpython-developers/GitPython#1603</a></li> <li><a href="https://github.com/r-darwish"><code>@​r-darwish</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1606">gitpython-developers/GitPython#1606</a></li> <li><a href="https://github.com/Beuc"><code>@​Beuc</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1609">gitpython-developers/GitPython#1609</a></li> </ul> <p><strong>Full Changelog</strong>: <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.31...3.1.32">https://github.com/gitpython-developers/GitPython/compare/3.1.31...3.1.32</a></p> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/gitpython-developers/GitPython/commit/5d45ce243a12669724e969442e6725a894e30fd4"><code>5d45ce2</code></a> prepare 3.1.32 release</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/ca965ecc81853bca7675261729143f54e5bf4cdd"><code>ca965ec</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1609">#1609</a> from Beuc/block-insecure-options-clone-non-multi</li> <li><a 
href="https://github.com/gitpython-developers/GitPython/commit/5c59e0d63da6180db8a0b349f0ad36fef42aceed"><code>5c59e0d</code></a> Block insecure non-multi options in clone/clone_from</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/c09a71e2caefd5c25195b0b2decc8177d658216a"><code>c09a71e</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1606">#1606</a> from r-darwish/no-del</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/a3859ee6f72e604d46a63dcd9fa3098adcc35cb0"><code>a3859ee</code></a> fixes</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/8186159af1a35c57829d86dd9a5a8c4f472f4637"><code>8186159</code></a> Don't rely on <strong>del</strong></li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/741edb54300fb4eb172e85e8ea0f07b4bd39bcc0"><code>741edb5</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1603">#1603</a> from eUgEntOptIc44/eugenoptic44-fix-pypi-long-descri...</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/0c543cd0ddedeaee27ca5e7c4c22b25a8fd5becb"><code>0c543cd</code></a> Improve readability of README.md</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/9cd7ddb96022dd30cfe7b64378e3b32a3747c1dd"><code>9cd7ddb</code></a> Improve the 'long_description' displayed on pypi</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/6fc11e6e36e524a6749e15046eca3a8601745822"><code>6fc11e6</code></a> update README to reflect the status quo on <code>git</code> command usage</li> <li>Additional commits viewable in <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.31...3.1.32">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=gitpython&package-manager=pip&previous-version=3.1.31&new-version=3.1.32)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/datasets-server/network/alerts). </details>
build(deps-dev): bump gitpython from 3.1.31 to 3.1.32 in /libs/libcommon: Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.31 to 3.1.32. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/gitpython-developers/GitPython/releases">gitpython's releases</a>.</em></p> <blockquote> <h2>v3.1.32 - with another security update</h2> <h2>What's Changed</h2> <ul> <li>Bump cygwin/cygwin-install-action from 3 to 4 by <a href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1572">gitpython-developers/GitPython#1572</a></li> <li>Fix up the commit trailers functionality by <a href="https://github.com/itsluketwist"><code>@​itsluketwist</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1576">gitpython-developers/GitPython#1576</a></li> <li>Name top-level exceptions as private variables by <a href="https://github.com/Hawk777"><code>@​Hawk777</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1590">gitpython-developers/GitPython#1590</a></li> <li>fix pypi long description by <a href="https://github.com/eUgEntOptIc44"><code>@​eUgEntOptIc44</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1603">gitpython-developers/GitPython#1603</a></li> <li>Don't rely on <strong>del</strong> by <a href="https://github.com/r-darwish"><code>@​r-darwish</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1606">gitpython-developers/GitPython#1606</a></li> <li>Block insecure non-multi options in clone/clone_from by <a href="https://github.com/Beuc"><code>@​Beuc</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1609">gitpython-developers/GitPython#1609</a></li> </ul> <h2>New Contributors</h2> <ul> <li><a href="https://github.com/Hawk777"><code>@​Hawk777</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1590">gitpython-developers/GitPython#1590</a></li> <li><a href="https://github.com/eUgEntOptIc44"><code>@​eUgEntOptIc44</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1603">gitpython-developers/GitPython#1603</a></li> <li><a href="https://github.com/r-darwish"><code>@​r-darwish</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1606">gitpython-developers/GitPython#1606</a></li> <li><a href="https://github.com/Beuc"><code>@​Beuc</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1609">gitpython-developers/GitPython#1609</a></li> </ul> <p><strong>Full Changelog</strong>: <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.31...3.1.32">https://github.com/gitpython-developers/GitPython/compare/3.1.31...3.1.32</a></p> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/gitpython-developers/GitPython/commit/5d45ce243a12669724e969442e6725a894e30fd4"><code>5d45ce2</code></a> prepare 3.1.32 release</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/ca965ecc81853bca7675261729143f54e5bf4cdd"><code>ca965ec</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1609">#1609</a> from 
Beuc/block-insecure-options-clone-non-multi</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/5c59e0d63da6180db8a0b349f0ad36fef42aceed"><code>5c59e0d</code></a> Block insecure non-multi options in clone/clone_from</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/c09a71e2caefd5c25195b0b2decc8177d658216a"><code>c09a71e</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1606">#1606</a> from r-darwish/no-del</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/a3859ee6f72e604d46a63dcd9fa3098adcc35cb0"><code>a3859ee</code></a> fixes</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/8186159af1a35c57829d86dd9a5a8c4f472f4637"><code>8186159</code></a> Don't rely on <strong>del</strong></li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/741edb54300fb4eb172e85e8ea0f07b4bd39bcc0"><code>741edb5</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1603">#1603</a> from eUgEntOptIc44/eugenoptic44-fix-pypi-long-descri...</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/0c543cd0ddedeaee27ca5e7c4c22b25a8fd5becb"><code>0c543cd</code></a> Improve readability of README.md</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/9cd7ddb96022dd30cfe7b64378e3b32a3747c1dd"><code>9cd7ddb</code></a> Improve the 'long_description' displayed on pypi</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/6fc11e6e36e524a6749e15046eca3a8601745822"><code>6fc11e6</code></a> update README to reflect the status quo on <code>git</code> command usage</li> <li>Additional commits viewable in <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.31...3.1.32">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=gitpython&package-manager=pip&previous-version=3.1.31&new-version=3.1.32)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/datasets-server/network/alerts). </details>
closed
2023-08-11T20:35:58Z
2023-08-11T20:52:10Z
2023-08-11T20:52:00Z
dependabot[bot]
1,847,397,153
build(deps-dev): bump gitpython from 3.1.31 to 3.1.32 in /e2e
Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.31 to 3.1.32. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/gitpython-developers/GitPython/releases">gitpython's releases</a>.</em></p> <blockquote> <h2>v3.1.32 - with another security update</h2> <h2>What's Changed</h2> <ul> <li>Bump cygwin/cygwin-install-action from 3 to 4 by <a href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1572">gitpython-developers/GitPython#1572</a></li> <li>Fix up the commit trailers functionality by <a href="https://github.com/itsluketwist"><code>@​itsluketwist</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1576">gitpython-developers/GitPython#1576</a></li> <li>Name top-level exceptions as private variables by <a href="https://github.com/Hawk777"><code>@​Hawk777</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1590">gitpython-developers/GitPython#1590</a></li> <li>fix pypi long description by <a href="https://github.com/eUgEntOptIc44"><code>@​eUgEntOptIc44</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1603">gitpython-developers/GitPython#1603</a></li> <li>Don't rely on <strong>del</strong> by <a href="https://github.com/r-darwish"><code>@​r-darwish</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1606">gitpython-developers/GitPython#1606</a></li> <li>Block insecure non-multi options in clone/clone_from by <a href="https://github.com/Beuc"><code>@​Beuc</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1609">gitpython-developers/GitPython#1609</a></li> </ul> <h2>New Contributors</h2> <ul> <li><a href="https://github.com/Hawk777"><code>@​Hawk777</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1590">gitpython-developers/GitPython#1590</a></li> <li><a href="https://github.com/eUgEntOptIc44"><code>@​eUgEntOptIc44</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1603">gitpython-developers/GitPython#1603</a></li> <li><a href="https://github.com/r-darwish"><code>@​r-darwish</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1606">gitpython-developers/GitPython#1606</a></li> <li><a href="https://github.com/Beuc"><code>@​Beuc</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1609">gitpython-developers/GitPython#1609</a></li> </ul> <p><strong>Full Changelog</strong>: <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.31...3.1.32">https://github.com/gitpython-developers/GitPython/compare/3.1.31...3.1.32</a></p> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/gitpython-developers/GitPython/commit/5d45ce243a12669724e969442e6725a894e30fd4"><code>5d45ce2</code></a> prepare 3.1.32 release</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/ca965ecc81853bca7675261729143f54e5bf4cdd"><code>ca965ec</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1609">#1609</a> from Beuc/block-insecure-options-clone-non-multi</li> <li><a 
href="https://github.com/gitpython-developers/GitPython/commit/5c59e0d63da6180db8a0b349f0ad36fef42aceed"><code>5c59e0d</code></a> Block insecure non-multi options in clone/clone_from</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/c09a71e2caefd5c25195b0b2decc8177d658216a"><code>c09a71e</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1606">#1606</a> from r-darwish/no-del</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/a3859ee6f72e604d46a63dcd9fa3098adcc35cb0"><code>a3859ee</code></a> fixes</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/8186159af1a35c57829d86dd9a5a8c4f472f4637"><code>8186159</code></a> Don't rely on <strong>del</strong></li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/741edb54300fb4eb172e85e8ea0f07b4bd39bcc0"><code>741edb5</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1603">#1603</a> from eUgEntOptIc44/eugenoptic44-fix-pypi-long-descri...</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/0c543cd0ddedeaee27ca5e7c4c22b25a8fd5becb"><code>0c543cd</code></a> Improve readability of README.md</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/9cd7ddb96022dd30cfe7b64378e3b32a3747c1dd"><code>9cd7ddb</code></a> Improve the 'long_description' displayed on pypi</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/6fc11e6e36e524a6749e15046eca3a8601745822"><code>6fc11e6</code></a> update README to reflect the status quo on <code>git</code> command usage</li> <li>Additional commits viewable in <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.31...3.1.32">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=gitpython&package-manager=pip&previous-version=3.1.31&new-version=3.1.32)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/datasets-server/network/alerts). </details>
build(deps-dev): bump gitpython from 3.1.31 to 3.1.32 in /e2e: Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.31 to 3.1.32. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/gitpython-developers/GitPython/releases">gitpython's releases</a>.</em></p> <blockquote> <h2>v3.1.32 - with another security update</h2> <h2>What's Changed</h2> <ul> <li>Bump cygwin/cygwin-install-action from 3 to 4 by <a href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1572">gitpython-developers/GitPython#1572</a></li> <li>Fix up the commit trailers functionality by <a href="https://github.com/itsluketwist"><code>@​itsluketwist</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1576">gitpython-developers/GitPython#1576</a></li> <li>Name top-level exceptions as private variables by <a href="https://github.com/Hawk777"><code>@​Hawk777</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1590">gitpython-developers/GitPython#1590</a></li> <li>fix pypi long description by <a href="https://github.com/eUgEntOptIc44"><code>@​eUgEntOptIc44</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1603">gitpython-developers/GitPython#1603</a></li> <li>Don't rely on <strong>del</strong> by <a href="https://github.com/r-darwish"><code>@​r-darwish</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1606">gitpython-developers/GitPython#1606</a></li> <li>Block insecure non-multi options in clone/clone_from by <a href="https://github.com/Beuc"><code>@​Beuc</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1609">gitpython-developers/GitPython#1609</a></li> </ul> <h2>New Contributors</h2> <ul> <li><a href="https://github.com/Hawk777"><code>@​Hawk777</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1590">gitpython-developers/GitPython#1590</a></li> <li><a href="https://github.com/eUgEntOptIc44"><code>@​eUgEntOptIc44</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1603">gitpython-developers/GitPython#1603</a></li> <li><a href="https://github.com/r-darwish"><code>@​r-darwish</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1606">gitpython-developers/GitPython#1606</a></li> <li><a href="https://github.com/Beuc"><code>@​Beuc</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1609">gitpython-developers/GitPython#1609</a></li> </ul> <p><strong>Full Changelog</strong>: <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.31...3.1.32">https://github.com/gitpython-developers/GitPython/compare/3.1.31...3.1.32</a></p> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/gitpython-developers/GitPython/commit/5d45ce243a12669724e969442e6725a894e30fd4"><code>5d45ce2</code></a> prepare 3.1.32 release</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/ca965ecc81853bca7675261729143f54e5bf4cdd"><code>ca965ec</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1609">#1609</a> from 
Beuc/block-insecure-options-clone-non-multi</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/5c59e0d63da6180db8a0b349f0ad36fef42aceed"><code>5c59e0d</code></a> Block insecure non-multi options in clone/clone_from</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/c09a71e2caefd5c25195b0b2decc8177d658216a"><code>c09a71e</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1606">#1606</a> from r-darwish/no-del</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/a3859ee6f72e604d46a63dcd9fa3098adcc35cb0"><code>a3859ee</code></a> fixes</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/8186159af1a35c57829d86dd9a5a8c4f472f4637"><code>8186159</code></a> Don't rely on <strong>del</strong></li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/741edb54300fb4eb172e85e8ea0f07b4bd39bcc0"><code>741edb5</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1603">#1603</a> from eUgEntOptIc44/eugenoptic44-fix-pypi-long-descri...</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/0c543cd0ddedeaee27ca5e7c4c22b25a8fd5becb"><code>0c543cd</code></a> Improve readability of README.md</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/9cd7ddb96022dd30cfe7b64378e3b32a3747c1dd"><code>9cd7ddb</code></a> Improve the 'long_description' displayed on pypi</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/6fc11e6e36e524a6749e15046eca3a8601745822"><code>6fc11e6</code></a> update README to reflect the status quo on <code>git</code> command usage</li> <li>Additional commits viewable in <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.31...3.1.32">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=gitpython&package-manager=pip&previous-version=3.1.31&new-version=3.1.32)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/datasets-server/network/alerts). </details>
closed
2023-08-11T20:35:17Z
2023-08-11T20:52:08Z
2023-08-11T20:52:05Z
dependabot[bot]
1,847,385,841
feat: 🎸 add metrics for all the volumes
fixes #1561
feat: 🎸 add metrics for all the volumes: fixes #1561
closed
2023-08-11T20:25:41Z
2023-08-14T13:43:29Z
2023-08-11T20:51:44Z
severo
1,847,325,777
Set cache metrics cron schedule to default value
It will default to `schedule: "13 00 * * *"` in values.yaml
Set cache metrics cron schedule to default value: It will default to `schedule: "13 00 * * *"` in values.yaml
closed
2023-08-11T19:32:42Z
2023-08-11T19:33:39Z
2023-08-11T19:33:38Z
AndreaFrancis
1,847,223,883
feat: 🎸 move openapi.json to the docs
note that we cannot serve it from the deployed docs (see https://github.com/huggingface/doc-builder/issues/312#issuecomment-1675099444), so we redirect to GitHub instead. Also: we fix the GitHub action that checks openapi (it was misspelled as "opanapi")
feat: 🎸 move openapi.json to the docs: note that we cannot serve it from the deployed docs (see https://github.com/huggingface/doc-builder/issues/312#issuecomment-1675099444), so we redirect to GitHub instead. Also: we fix the GitHub action that checks openapi (it was misspelled as "opanapi")
closed
2023-08-11T18:03:30Z
2023-08-11T18:28:58Z
2023-08-11T18:18:05Z
severo
1,847,109,250
Document all the X-Error-Code in OpenAPI
and maybe also in the docs, i.e., a page with all the error types. Related to #1670 (I think we first want to generate the OpenAPI spec automatically, before documenting all the error codes)
Document all the X-Error-Code in OpenAPI: and maybe also in the docs, i.e., a page with all the error types. Related to #1670 (I think we first want to generate the OpenAPI spec automatically, before documenting all the error codes)
open
2023-08-11T16:26:42Z
2023-08-11T16:26:50Z
null
severo
1,847,108,089
Generate OpenAPI specification from the code
It would help to: - ensure the OpenAPI spec is always up to date - reduce the maintenance burden - allow contract testing
Generate OpenAPI specification from the code: It would help to: - ensure the OpenAPI spec is always up to date - reduce the maintenance burden - allow contract testing
open
2023-08-11T16:25:52Z
2023-08-11T16:28:36Z
null
severo
1,847,097,928
Adding StreamingRowsError to backfill
Temporarily adding StreamingRowsError to error_codes_to_retry in order to backfill datasets from https://github.com/huggingface/datasets-server/issues/1550
Adding StreamingRowsError to backfill: Temporarily adding StreamingRowsError to error_codes_to_retry in order to backfill datasets from https://github.com/huggingface/datasets-server/issues/1550
closed
2023-08-11T16:17:31Z
2023-08-11T16:38:58Z
2023-08-11T16:38:58Z
AndreaFrancis
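A hypothetical sketch of what the temporary change amounts to (the other entry in the list and the helper are illustrative, not the real configuration):

```python
# Error codes for which cached entries may be recomputed during backfill.
ERROR_CODES_TO_RETRY = [
    "CreateCommitError",   # illustrative existing entry
    "StreamingRowsError",  # temporary: re-run datasets hit by issue #1550
]


def should_retry(error_code: str) -> bool:
    return error_code in ERROR_CODES_TO_RETRY


assert should_retry("StreamingRowsError")
```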
1,846,715,345
Delete empty folders from downloaded duckdb indexes in /search
See comment https://github.com/huggingface/datasets-server/pull/1536#discussion_r1283612238 Currently, the job deletes expired files (older than 3 days in prod), but if the folders are left empty they will stick around; we should remove them.
Delete empty folders from downloaded duckdb indexes in /search: See comment https://github.com/huggingface/datasets-server/pull/1536#discussion_r1283612238 Currently, the job deletes expired files (older than 3 days in prod), but if the folders are left empty they will stick around; we should remove them.
closed
2023-08-11T12:10:29Z
2023-11-07T13:46:44Z
2023-11-07T13:46:43Z
AndreaFrancis
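A simple sketch of the missing cleanup (the root directory is illustrative): after the expired index files are deleted, walk the tree bottom-up and drop any directory left empty.

```python
import os


def delete_empty_dirs(root: str) -> None:
    # topdown=False yields leaves first, so a parent emptied by an earlier
    # iteration is itself seen as empty when we reach it.
    for dirpath, _dirnames, _filenames in os.walk(root, topdown=False):
        if dirpath != root and not os.listdir(dirpath):
            os.rmdir(dirpath)


delete_empty_dirs("/storage/duckdb-index/downloads")  # hypothetical path
```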
1,846,009,794
fix: 🐛 update the OpenAPI spec
Missing: - [x] ensure we documented all the status codes: missing: <strike>400 (BAD_REQUEST)</strike> (removed 400 from the code with [d4ee7a5](https://github.com/huggingface/datasets-server/pull/1667/commits/d4ee7a5bd32b9c1666e0f1293c8c0292a265e133)), 501 (NOT_IMPLEMENTED) - [x] ensure the OpenAPI spec is correct (I have some issues in stoplight, some parts are not rendered). It works well in https://redocly.github.io/redoc/?url=https://github.com/huggingface/datasets-server/raw/0e3a700bb3716aad9d90897e844c9d40455a3b6f/chart/static-files/openapi.json#operation/getOptInOutUrls - [x] move openapi outside of the chart? its size generates issues in the CI, and anyway it does not pertain to the chart. See #849. - [x] validate the openapi spec in the CI <strike>with spectral</strike> -> #446 - [x] fix the e2e tests that use openapi (contract testing) In other PRs: - document all the X-Error-Code in OpenAPI -> #1671 - do contract testing on the OpenAPI examples. Maybe use a new field, like `x-contract-testing-url`, to get the URL to test programmatically. Also: create permanent test datasets (like severo/private) in prod that can be used to contract test in prod? -> #518 - generate OpenAPI spec from the code -> #1670 - test authenticated against authorized, and maybe add examples in the openapi spec -> in #1656
fix: 🐛 update the OpenAPI spec: Missing: - [x] ensure we documented all the status codes: missing: <strike>400 (BAD_REQUEST)</strike> (removed 400 from the code with [d4ee7a5](https://github.com/huggingface/datasets-server/pull/1667/commits/d4ee7a5bd32b9c1666e0f1293c8c0292a265e133)), 501 (NOT_IMPLEMENTED) - [x] ensure the OpenAPI spec is correct (I have some issues in stoplight, some parts are not rendered). It works well in https://redocly.github.io/redoc/?url=https://github.com/huggingface/datasets-server/raw/0e3a700bb3716aad9d90897e844c9d40455a3b6f/chart/static-files/openapi.json#operation/getOptInOutUrls - [x] move openapi outside of the chart? its size generates issues in the CI, and anyway it does not pertain to the chart. See #849. - [x] validate the openapi spec in the CI <strike>with spectral</strike> -> #446 - [x] fix the e2e tests that use openapi (contract testing) In other PRs: - document all the X-Error-Code in OpenAPI -> #1671 - do contract testing on the OpenAPI examples. Maybe use a new field, like `x-contract-testing-url`, to get the URL to test programmatically. Also: create permanent test datasets (like severo/private) in prod that can be used to contract test in prod? -> #518 - generate OpenAPI spec from the code -> #1670 - test authenticated against authorized, and maybe add examples in the openapi spec -> in #1656
closed
2023-08-10T23:19:07Z
2023-08-11T19:30:30Z
2023-08-11T19:29:58Z
severo