Dataset fields:
- id: int64 (values range from 959M to 2.55B)
- title: string (3 to 133 characters)
- body: string (1 to 65.5k characters)
- description: string (5 to 65.6k characters; the title and body joined as "title: body")
- state: string (2 distinct values)
- created_at: string (20 characters)
- updated_at: string (20 characters)
- closed_at: string (20 characters)
- user: string (174 distinct values)
1,783,215,209
Adding retry to create duckdb index commit
Should fix HfHubHTTPError when creating the commit for duckdb-index. Part of https://github.com/huggingface/datasets-server/issues/1462 (a minimal retry sketch follows this entry).
closed
2023-06-30T22:16:13Z
2023-07-03T12:56:23Z
2023-07-03T12:56:22Z
AndreaFrancis
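The entry above is about retrying `create_commit` when the Hub returns a transient error. As a hedged sketch of that idea (not the datasets-server helper itself; the backoff policy, file path and branch name below are illustrative assumptions), a retry loop around `HfApi.create_commit` that only retries on `HfHubHTTPError` could look like this:

```python
# Minimal sketch: retry HfApi.create_commit on transient Hub HTTP errors.
# Not the repository's implementation; names and values are illustrative.
import time

from huggingface_hub import CommitOperationAdd, HfApi
from huggingface_hub.utils import HfHubHTTPError


def create_commit_with_retry(api: HfApi, repo_id: str, operations, message: str,
                             revision: str, max_attempts: int = 5) -> None:
    for attempt in range(1, max_attempts + 1):
        try:
            api.create_commit(
                repo_id=repo_id,
                repo_type="dataset",
                revision=revision,
                operations=operations,
                commit_message=message,
            )
            return
        except HfHubHTTPError:
            if attempt == max_attempts:
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff


# Hypothetical usage: upload an index file to a convert branch.
# api = HfApi(token="hf_...")
# ops = [CommitOperationAdd(path_in_repo="index.duckdb", path_or_fileobj="/tmp/index.duckdb")]
# create_commit_with_retry(api, "user/dataset", ops, "Add duckdb index", revision="refs/convert/parquet")  # branch name is illustrative
```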
1,783,075,231
Ensure parquet shards are sorted
fixes #1397
closed
2023-06-30T19:44:37Z
2023-06-30T20:02:40Z
2023-06-30T20:02:13Z
severo
1,782,967,650
Change the way we represent ResponseAlreadyComputedError in the cache
When a "parallel" step has already been computed, an error is stored in the cache with the `ResponseAlreadyComputedError` error_code and HTTP status 500 (i.e. if `split-first-rows-from-streaming` exists, then `split-first-rows-from-parquet` does not need to be computed). But it makes it hard to monitor the "true" errors. If we follow the analogy with HTTP status codes, it should be 3xx instead of 5xx, i.e. a redirection to another resource. I don't know how we should change this though. Let's put ideas in the issue.
closed
2023-06-30T18:13:34Z
2024-02-23T09:56:05Z
2024-02-23T09:56:04Z
severo
1,782,821,542
Change the structure of parquet files
The parquet files will be stored in the `refs/convert/parquet` "branch" with the following structure:
```
[config]/[split]/[shard index: 0000 to 9999].parquet
```
Note that the "partially" converted datasets will use the following (see https://github.com/huggingface/datasets-server/pull/1448):
```
[config]/[split]/partial/[shard index: 0000 to 9999].parquet
```
closed
2023-06-30T16:33:12Z
2023-08-17T20:41:16Z
2023-08-17T20:41:16Z
severo
1,782,478,740
split-duckdb-index many UnexpectedError in error_code
Updated query (without errors copied from the parent step):
```
db.cachedResponsesBlue.aggregate([
  {$match: {error_code: "UnexpectedError", kind: "split-duckdb-index", "details.copied_from_artifact": {$exists: false}}},
  {$group: {_id: {cause: "$details.cause_exception"}, count: {$sum: 1}}},
  {$sort: {count: -1}}
])
```
From the 128617 records currently in the cache collection, these are the top kinds of UnexpectedError:
```
[
  { _id: { cause: 'HfHubHTTPError' }, count: 4429 },
  { _id: { cause: 'HTTPException' }, count: 2570 },
  { _id: { cause: 'Error' }, count: 54 },
  { _id: { cause: 'BinderException' }, count: 41 },
  { _id: { cause: 'CatalogException' }, count: 38 },
  { _id: { cause: 'ParserException' }, count: 29 },
  { _id: { cause: 'InvalidInputException' }, count: 22 },
  { _id: { cause: 'RuntimeError' }, count: 8 },
  { _id: { cause: 'IOException' }, count: 5 },
  { _id: { cause: 'BadRequestError' }, count: 2 },
  { _id: { cause: 'NotPrimaryError' }, count: 2 },
  { _id: { cause: 'EntryNotFoundError' }, count: 2 }
]
```
Since this is a new job runner, most of these should be evaluated in case there is a bug in the code. (A pymongo version of the query follows this entry.)
closed
2023-06-30T12:52:15Z
2023-08-11T15:44:16Z
2023-08-11T15:44:16Z
AndreaFrancis
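For context on the query quoted in the entry above, here is roughly the same aggregation expressed with pymongo; the connection string and database name are assumptions, only the collection and field names come from the issue:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # connection string is illustrative
cache = client["datasets_server_cache"]["cachedResponsesBlue"]  # database name is an assumption

pipeline = [
    {"$match": {
        "error_code": "UnexpectedError",
        "kind": "split-duckdb-index",
        "details.copied_from_artifact": {"$exists": False},
    }},
    {"$group": {"_id": {"cause": "$details.cause_exception"}, "count": {"$sum": 1}}},
    {"$sort": {"count": -1}},
]
for doc in cache.aggregate(pipeline):
    print(doc["_id"].get("cause"), doc["count"])
```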
1,781,341,194
Disable backfill k8s Job
null
closed
2023-06-29T19:04:42Z
2023-06-29T19:06:03Z
2023-06-29T19:06:02Z
AndreaFrancis
1,781,072,941
feat: 🎸 backfill the datasets
because the dataset-is-valid step version has been increased. Using this to also fix possible issues (see https://github.com/huggingface/datasets-server/pull/1345, which we had put on hold waiting for #1346, which has since been fixed)
closed
2023-06-29T15:30:58Z
2023-06-29T15:31:05Z
2023-06-29T15:31:04Z
severo
1,780,946,502
fix: split-duckdb-index error when indexing columns with spaces
After deploying split-duckdb-index, there are some errors because of column names with spaces, like:
```
duckdb.BinderException: Binder Error: Referenced column "Mean" not found in FROM clause!
Candidate bindings: "read_parquet.Model"
LINE 1: ...f_index_id, Ranking,User,Model,Results,**Mean Reward**,Std Reward FROM read_parque...
^
```
This PR fixes the error. (A small illustration of quoting such identifiers follows this entry.)
closed
2023-06-29T14:25:59Z
2023-06-29T15:59:00Z
2023-06-29T15:54:39Z
AndreaFrancis
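The fix described above boils down to quoting identifiers when building the SQL. A self-contained illustration under assumed file and column names (this is not the repository's indexing code):

```python
import duckdb
import pyarrow as pa
import pyarrow.parquet as pq

# Tiny parquet file with a column name that contains a space.
pq.write_table(pa.table({"Model": ["a"], "Mean Reward": [1.0]}), "shard.parquet")

columns = ["Model", "Mean Reward"]
select_list = ", ".join(f'"{c}"' for c in columns)  # double-quote every identifier

con = duckdb.connect()
con.sql(f"CREATE TABLE data AS SELECT {select_list} FROM read_parquet('shard.parquet')")
print(con.sql("SELECT * FROM data").fetchall())
```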
1,780,313,353
Improve metrics to hide duplicates
Yesterday, a new step (`split-duckdb-index`) was added and run over all the datasets. Here are the metrics: <img width="771" alt="Capture d’écran 2023-06-29 à 09 46 31" src="https://github.com/huggingface/datasets-server/assets/1676121/5daf2798-c79b-4ff9-a192-03ba38d4b149"> We can see that many new cache entries have been filled and that many of them are errors. But looking into the details, we can see that most of them are copies of the error in a previous step. It would be useful to show only the errors related to the new step.
closed
2023-06-29T07:49:28Z
2024-02-06T14:48:11Z
2024-02-06T14:48:11Z
severo
1,780,284,502
feat: 🎸 reduce the number of workers back to 20
null
closed
2023-06-29T07:27:34Z
2023-06-29T07:28:07Z
2023-06-29T07:27:39Z
severo
1,779,744,022
Disable backfill - ACTION = skip
null
closed
2023-06-28T20:41:20Z
2023-06-28T20:42:36Z
2023-06-28T20:42:34Z
AndreaFrancis
1,779,350,448
Update quality target in Makefile for /chart
The path to staging environment in the Makefile was outdated (the name was changed from `dev` to `staging`)
closed
2023-06-28T17:00:24Z
2023-06-28T18:33:54Z
2023-06-28T18:33:53Z
polinaeterna
1,779,215,015
Enable backfill one time
After this PR is merged and deployed, I will rollback to action=skip to disable the k8s job.
closed
2023-06-28T15:36:22Z
2023-06-28T15:53:08Z
2023-06-28T15:53:07Z
AndreaFrancis
1,779,204,041
Temporarily increase resources for new job runner split-duckdb-index
null
closed
2023-06-28T15:29:19Z
2023-06-28T15:31:06Z
2023-06-28T15:31:05Z
AndreaFrancis
1,779,167,702
Replace valid with preview and viewer
replaces #1450 and #1447
fixes #1446 and #1445
- [x] remove `valid` field from `/valid` endpoint
- [x] replace `valid` with `viewer` and `preview` in `/is-valid`
- [x] update the docs
- [x] update openapi, <strike>rapidapi</strike> (I don't really understand rapidapi anymore), postman

This change is breaking the API for /valid and /is-valid. See https://moon-ci-docs.huggingface.co/docs/datasets-server/pr_1452/en/valid.
- [x] for Autotrain, I think the endpoints are not used in the code (https://github.com/huggingface/autotrain-ui and https://github.com/huggingface/autotrain-backend/). cc @abhishekkrthakur
- [x] for model-evaluator, I opened https://github.com/huggingface/model-evaluator/pull/71 cc @lewtun
closed
2023-06-28T15:09:37Z
2023-06-29T14:13:46Z
2023-06-29T14:13:14Z
severo
1,779,124,224
Change duckdb committer key
null
closed
2023-06-28T14:47:37Z
2023-06-28T14:50:37Z
2023-06-28T14:50:36Z
AndreaFrancis
1,779,109,875
Add preview and viewer to is valid
null
closed
2023-06-28T14:41:05Z
2023-06-28T15:10:19Z
2023-06-28T15:09:50Z
severo
1,779,028,625
Adding debug logs for split-duckdb-index
When processing split-duckdb-index in the staging env, it shows this message:
```
DEBUG: 2023-06-28 13:42:51,345 - root - The dataset does not exist on the Hub.
DEBUG: 2023-06-28 13:42:51,349 - root - Directory removed: /duckdb-index/21626898975922-split-duckdb-index-asoria-sample_glue-84f50613
DEBUG: 2023-06-28 13:42:51,349 - root - [split-duckdb-index] the dataset=asoria/sample_glue could not be found, don't update the cache
DEBUG: 2023-06-28 13:42:51,350 - root - [split-duckdb-index] job output with ERROR - JobManager(job_id=649c38d766b00b1fb05b8e4e dataset=asoria/sample_glue job_info={'job_id': '649c38d766b00b1fb05b8e4e', 'type': 'split-duckdb-index', 'params': {'dataset': 'asoria/sample_glue', 'revision': 'cf2d2d9273f1d361831baafab1f29eeb95a6af56', 'config': 'default', 'split': 'test'}, 'priority': <Priority.NORMAL: 'normal'>}
```
It looks like some Hub operation does not find the dataset, so the cache result cannot be stored. Adding a couple of debug logs.
closed
2023-06-28T14:04:54Z
2023-06-28T14:18:18Z
2023-06-28T14:18:17Z
AndreaFrancis
1,778,626,219
Stream convert to parquet
Allow datasets to be partially converted to parquet, like c4, refinedweb, oscar, etc. Datasets above 5GB are streamed to generate 5GB (uncompressed) of parquet files.

## Implementation details

I implemented a context manager `limite_parquet_writes` that does some monkeypatching in the `datasets` lib to stop the dataset generation at the right time. This is not the kind of feature that would be implemented natively in the `datasets` library, so I'm convinced this is the easiest way to do it. I added a test for this helper to make sure it works as expected and keeps working in the future. I'm also using the new path scheme `config/split/partial/SSSS.parquet`. I didn't change the scheme for regular parquet conversion though - it can be done later. (A hedged sketch of the size-capping idea follows this entry.)

close https://github.com/huggingface/datasets-server/issues/1257
closed
2023-06-28T10:06:31Z
2023-07-03T15:42:26Z
2023-07-03T15:40:32Z
lhoestq
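The size-capping idea mentioned in the PR above can be illustrated with a minimal context manager. This is a hedged sketch under the assumption that capping `pyarrow.parquet.ParquetWriter.write_table` is enough; it is not the `limite_parquet_writes` helper from the PR:

```python
from contextlib import contextmanager
from unittest.mock import patch

import pyarrow as pa
import pyarrow.parquet as pq


class SizeBudgetExceeded(Exception):
    pass


@contextmanager
def limit_parquet_writes(max_bytes: int):
    """Raise once roughly `max_bytes` of (uncompressed) data went through ParquetWriter."""
    written = 0
    original_write_table = pq.ParquetWriter.write_table

    def write_table(self, table, *args, **kwargs):
        nonlocal written
        if written >= max_bytes:
            raise SizeBudgetExceeded  # the caller catches this and keeps the partial file
        written += table.nbytes
        return original_write_table(self, table, *args, **kwargs)

    with patch.object(pq.ParquetWriter, "write_table", write_table):
        yield


table = pa.table({"text": ["hello world"] * 1000})
with limit_parquet_writes(max_bytes=50_000):
    with pq.ParquetWriter("partial.parquet", table.schema) as writer:
        try:
            for _ in range(100):
                writer.write_table(table)
        except SizeBudgetExceeded:
            pass  # partial.parquet holds whatever fit in the budget
```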
1,778,601,816
docs: ✏️ add docs for fields viewer and preview in /valid
null
closed
2023-06-28T09:52:14Z
2023-06-28T15:10:41Z
2023-06-28T15:10:09Z
severo
1,778,545,555
Add fields `viewer` and `preview` to /is-valid
For coherence with /valid, we should add the `viewer` and `preview` fields to /is-valid. We should also consider deprecating the current `valid` field (as in https://github.com/huggingface/datasets-server/issues/1445). Note that it's in use in https://github.com/search?q=org%3Ahuggingface+datasets-server.huggingface.co+repo%3Ahuggingface%2Fnotebooks&type=code and also in @lewtun's evaluator, if I remember correctly. (A usage example follows this entry.)
closed
2023-06-28T09:19:56Z
2023-06-29T14:13:16Z
2023-06-29T14:13:16Z
severo
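Once the fields exist, a client can read them directly from /is-valid. A hedged usage example (the response shape follows the issue above; the field names are assumptions and may differ over time):

```python
import requests

response = requests.get(
    "https://datasets-server.huggingface.co/is-valid",
    params={"dataset": "glue"},
    timeout=10,
)
info = response.json()
print("viewer:", info.get("viewer"), "preview:", info.get("preview"))
```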
1,778,541,141
Remove `.valid` from `/valid` endpoint?
We recently added two fields to `/valid`:
- `viewer`: all the datasets that have a valid dataset viewer
- `preview`: all the datasets that don't have a valid dataset viewer, but have a dataset preview

And the Hub does not use the original field `valid` anymore. We still fill it with the union of both sets. Should we remove it, as it doubles the size of the response and increases the response time, with no benefit? cc @huggingface/datasets-server Note that it's used in the notebooks (https://github.com/search?q=org%3Ahuggingface+datasets-server.huggingface.co+repo%3Ahuggingface%2Fnotebooks&type=code), for example, so it is a breaking change. I would vote in favor of removing it, and updating the notebooks (and the docs obviously).
closed
2023-06-28T09:17:13Z
2023-07-26T15:47:35Z
2023-07-26T15:47:35Z
severo
1,778,524,649
Remove the useless mongodb indexes
Review the current indexes in the mongodb collections, and ensure all of them are required; else, remove the redundant ones. It will allow us to reduce the storage size on the server. (A sketch for listing the existing indexes follows this entry.)
closed
2023-06-28T09:07:19Z
2023-08-16T21:23:20Z
2023-08-16T21:23:20Z
severo
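One way to do the review described above is to list every index per collection and compare them by key. A sketch with pymongo; the connection string and database name are assumptions:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # illustrative
db = client["datasets_server_cache"]  # database name is an assumption

for collection_name in db.list_collection_names():
    print(collection_name)
    for index_name, spec in db[collection_name].index_information().items():
        print("  ", index_name, spec.get("key"))
```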
1,778,462,851
Raise specific errors (and error_code) instead of UnexpectedError
The following query on the production database gives the number of datasets with at least one cache entry with error_code "UnexpectedError", grouped by the underlying "cause_exception". For the most common ones (`DatasetGenerationError`, `HfHubHTTPError`, `OSError`, etc.) we would benefit from raising a specific error with its own error code. It would allow us to:
- retry automatically, if needed
- show an adequate error message to the users
- even: adapt the way we show the dataset viewer on the Hub

`null` means it has no `details.cause_exception`. These cache entries should be inspected more closely. See https://github.com/huggingface/datasets-server/issues/1123 in particular, which is one of the cases where no cause exception is reported.
```
db.cachedResponsesBlue.aggregate([
  {$match: {error_code: "UnexpectedError"}},
  {$group: {_id: {cause: "$details.cause_exception", dataset: "$dataset"}, count: {$sum: 1}}},
  {$group: {_id: "$_id.cause", count: {$sum: 1}}},
  {$sort: {count: -1}}
])
{ _id: 'DatasetGenerationError', count: 1964 }
{ _id: null, count: 1388 }
{ _id: 'HfHubHTTPError', count: 1154 }
{ _id: 'OSError', count: 433 }
{ _id: 'FileNotFoundError', count: 242 }
{ _id: 'FileExistsError', count: 198 }
{ _id: 'ValueError', count: 186 }
{ _id: 'TypeError', count: 160 }
{ _id: 'ConnectionError', count: 146 }
{ _id: 'RuntimeError', count: 86 }
{ _id: 'NonMatchingSplitsSizesError', count: 83 }
{ _id: 'FileSystemError', count: 62 }
{ _id: 'ClientResponseError', count: 52 }
{ _id: 'ArrowInvalid', count: 45 }
{ _id: 'ParquetResponseEmptyError', count: 43 }
{ _id: 'RepositoryNotFoundError', count: 41 }
{ _id: 'ManualDownloadError', count: 39 }
{ _id: 'IndexError', count: 28 }
{ _id: 'AttributeError', count: 16 }
{ _id: 'KeyError', count: 15 }
{ _id: 'GatedRepoError', count: 13 }
{ _id: 'NotImplementedError', count: 11 }
{ _id: 'ExpectedMoreSplits', count: 9 }
{ _id: 'PermissionError', count: 8 }
{ _id: 'BadRequestError', count: 7 }
{ _id: 'NonMatchingChecksumError', count: 6 }
{ _id: 'AssertionError', count: 4 }
{ _id: 'NameError', count: 4 }
{ _id: 'UnboundLocalError', count: 3 }
{ _id: 'JSONDecodeError', count: 3 }
{ _id: 'ZeroDivisionError', count: 3 }
{ _id: 'InvalidDocument', count: 3 }
{ _id: 'DoesNotExist', count: 3 }
{ _id: 'EOFError', count: 3 }
{ _id: 'ImportError', count: 3 }
{ _id: 'NotADirectoryError', count: 2 }
{ _id: 'RarCannotExec', count: 2 }
{ _id: 'ReadTimeout', count: 2 }
{ _id: 'ChunkedEncodingError', count: 2 }
{ _id: 'ExpectedMoreDownloadedFiles', count: 2 }
{ _id: 'InvalidConfigName', count: 2 }
{ _id: 'ModuleNotFoundError', count: 2 }
{ _id: 'Exception', count: 2 }
{ _id: 'MissingBeamOptions', count: 2 }
{ _id: 'HTTPError', count: 1 }
{ _id: 'BadZipFile', count: 1 }
{ _id: 'OverflowError', count: 1 }
{ _id: 'HFValidationError', count: 1 }
{ _id: 'IsADirectoryError', count: 1 }
{ _id: 'OperationalError', count: 1 }
```
open
2023-06-28T08:28:06Z
2024-08-01T11:11:21Z
null
severo
1,778,427,146
Add new dependencies for the job runners
Based on statistics about the most needed dependencies (https://github.com/huggingface/datasets-server/issues/1281#issuecomment-1609455781), we should prioritize adding [ir-datasets](https://pypi.org/project/ir-datasets/), [bioc](https://pypi.org/project/bioc/) and [pytorch_ie](https://pypi.org/project/pytorch-ie/).
closed
2023-06-28T08:04:49Z
2024-02-02T17:19:24Z
2024-02-02T17:19:23Z
severo
1,777,655,160
Adding other processing steps
Previously, `split-duckdb-index` was triggered only by `config-split-names-from-info`, but when this step finished with error 500 because of ResponseAlreadyComputedError, `split-duckdb-index` never started. Adding other parents to avoid skipping the job. Note that the issue with parallel processing steps will remain and should be fixed in https://github.com/huggingface/datasets-server/issues/1358
closed
2023-06-27T19:51:03Z
2023-06-30T09:04:08Z
2023-06-27T20:01:10Z
AndreaFrancis
1,777,404,581
Try to fix Duckdb extensions
Error when computing split-duckdb-index:
```
"details": {
  "error": "IO Error: Extension \"//.duckdb/extensions/v0.8.1/linux_amd64_gcc4/httpfs.duckdb_extension\" not found.\nExtension \"httpfs\" is an existing extension.\n\nInstall it first using \"INSTALL httpfs\".",
  "cause_exception": "IOException",
  "cause_message": "IO Error: Extension \"//.duckdb/extensions/v0.8.1/linux_amd64_gcc4/httpfs.duckdb_extension\" not found.\nExtension \"httpfs\" is an existing extension.\n\nInstall it first using \"INSTALL httpfs\".",
  "cause_traceback": [
    "Traceback (most recent call last):\n",
    " File \"/src/services/worker/src/worker/job_manager.py\", line 160, in process\n job_result = self.job_runner.compute()\n",
    " File \"/src/services/worker/src/worker/job_runners/split/duckdb_index.py\", line 256, in compute\n compute_index_rows(\n",
    " File \"/src/services/worker/src/worker/job_runners/split/duckdb_index.py\", line 148, in compute_index_rows\n con.sql(create_command_sql)\n",
    "duckdb.IOException: IO Error: Extension \"//.duckdb/extensions/v0.8.1/linux_amd64_gcc4/httpfs.duckdb_extension\" not found.\nExtension \"httpfs\" is an existing extension.\n\nInstall it first using \"INSTALL httpfs\".\n"
  ]
},
```
(An illustrative snippet setting the extension directory and installing httpfs follows this entry.)
closed
2023-06-27T17:08:56Z
2023-06-28T12:04:29Z
2023-06-27T19:12:46Z
AndreaFrancis
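The error above comes from DuckDB looking for its extensions under an unwritable home directory ("//"). As an illustration only (not the job runner's code; the directory path is an assumption), pointing DuckDB at a writable extension directory and installing httpfs explicitly looks like this:

```python
import duckdb

con = duckdb.connect()
con.execute("SET extension_directory='/tmp/duckdb-extensions'")  # any writable path
con.execute("INSTALL httpfs")  # may need network access the first time
con.execute("LOAD httpfs")
```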
1,777,335,026
Set Duckdb extensions install directory
Error `duckdb.IOException: IO Error: Failed to create directory \"//.duckdb\"!\n ` is shown when computing duckdb index.
closed
2023-06-27T16:24:25Z
2023-06-27T16:45:38Z
2023-06-27T16:45:36Z
AndreaFrancis
1,777,242,709
Increase chart version
null
closed
2023-06-27T15:30:39Z
2023-06-27T15:31:42Z
2023-06-27T15:31:41Z
AndreaFrancis
1,777,018,800
Use specific stemmer by dataset according to the language
Currently, the '`porter`' stemmer is used by default for duckdb indexing, here: https://github.com/huggingface/datasets-server/pull/1296/files#diff-d9a2c828d7feca3b7f9e332e040ef861e842a16d18276b356461d2aa34396a8aR145 See https://duckdb.org/docs/extensions/full_text_search.html for more details about the '`stemmer`' parameter. In the future, we could try to identify the dataset language and use an appropriate stemmer parameter when creating the `fts` index. (A hedged sketch follows this entry.)
open
2023-06-27T13:46:44Z
2024-08-22T00:45:07Z
null
AndreaFrancis
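A hedged sketch of the stemmer parameter mentioned above, using DuckDB's `create_fts_index` PRAGMA with a non-default stemmer. The table, text and language choice are illustrative; this is not the job runner's code:

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL fts")  # may need network access the first time
con.execute("LOAD fts")
con.execute("CREATE TABLE docs AS SELECT 1 AS id, 'les chats mangeaient' AS text")
# 'french' is one of the Snowball stemmers listed in the DuckDB docs linked above.
con.execute("PRAGMA create_fts_index('docs', 'id', 'text', stemmer='french')")
```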
1,777,002,036
Prevent using cache_subdirectory=None on JobRunnerWithCache's children
Currently, all job runners that depend on `JobRunnerWithCache` and use the `cache_subdirectory` field need to validate the generated value before using it, like here: https://github.com/huggingface/datasets-server/pull/1296/files#diff-d9a2c828d7feca3b7f9e332e040ef861e842a16d18276b356461d2aa34396a8aR248 We need a better way to prevent using `None` values, given that `JobRunnerWithCache` declares `cache_subdirectory: Optional[Path]` (https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/_job_runner_with_cache.py#L23). If the `pre_compute` method is not run, job runner children could fail because of a non-existing directory. @severo suggested adding a test on each job runner implementation of `JobRunnerWithCache` to ensure the validation is done. (A sketch of one validation approach follows this entry.)
open
2023-06-27T13:39:56Z
2023-08-07T16:35:04Z
null
AndreaFrancis
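One possible validation approach for the issue above is to funnel every access through a property that fails loudly when `pre_compute` has not run. This is a hedged sketch with made-up class internals, not the repository's `JobRunnerWithCache`:

```python
from pathlib import Path
from typing import Optional


class JobRunnerWithCache:
    cache_subdirectory: Optional[Path] = None

    def pre_compute(self) -> None:
        self.cache_subdirectory = Path("/tmp/job-cache")  # illustrative

    @property
    def validated_cache_subdirectory(self) -> Path:
        # Children use this instead of the raw Optional field.
        if self.cache_subdirectory is None:
            raise RuntimeError("pre_compute() must run before using the cache directory")
        return self.cache_subdirectory
```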
1,776,497,349
refactor: 💡 remove dead code
null
closed
2023-06-27T09:38:35Z
2023-06-27T13:04:02Z
2023-06-27T13:04:01Z
severo
1,776,475,451
Unquote path and revision when copying parquet files
close https://github.com/huggingface/datasets-server/issues/1433
closed
2023-06-27T09:26:04Z
2023-06-27T12:12:21Z
2023-06-27T12:12:20Z
lhoestq
1,776,453,056
Can't copy parquet files with path that can be URL encoded
We get this error for bigcode/the-stack in config-parquet-and-info:
```
huggingface_hub.utils._errors.EntryNotFoundError: Cannot copy data/c%2B%2B/train-00000-of-00214.parquet at revision 349a71353fd5868fb90b593ef09e311379da498a: file is missing on repo.
```
(A tiny unquoting example follows this entry.)
closed
2023-06-27T09:13:06Z
2023-06-27T12:12:21Z
2023-06-27T12:12:21Z
lhoestq
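The gist of the fix referenced above is to un-percent-encode the path before asking the Hub to copy it; the standard library is enough to show the effect:

```python
from urllib.parse import unquote

encoded = "data/c%2B%2B/train-00000-of-00214.parquet"
print(unquote(encoded))  # data/c++/train-00000-of-00214.parquet
```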
1,775,140,577
feat: 🎸 don't insert a new lock when releasing
null
closed
2023-06-26T16:16:57Z
2023-06-26T16:17:04Z
2023-06-26T16:17:03Z
severo
1,775,102,927
fix: 🐛 remove the "required" constraint on created_at in Lock
the code does not rely on always having a created_at field. And it's not that easy to always ensure it's filled (see `update(upsert=True, ...`)
closed
2023-06-26T15:56:40Z
2023-06-26T15:56:47Z
2023-06-26T15:56:46Z
severo
1,775,088,105
Fix auth in rows again
For real this time, including a test.
closed
2023-06-26T15:47:13Z
2023-06-26T17:23:14Z
2023-06-26T17:23:13Z
lhoestq
1,774,326,535
Ignore duckdb files in parquet and info
needed for https://github.com/huggingface/datasets-server/pull/1296 see https://github.com/huggingface/datasets-server/pull/1296#issuecomment-1604502957
closed
2023-06-26T09:22:03Z
2023-06-26T10:40:26Z
2023-06-26T10:40:25Z
lhoestq
1,772,942,026
Rename classes to indicate inheritance
This commit adds `Document` suffix to classes to indicate inheritance. cc @severo Fixes https://github.com/huggingface/datasets-server/issues/1359
closed
2023-06-24T22:06:58Z
2023-07-01T16:20:40Z
2023-07-01T15:49:33Z
geethika-123
1,771,743,801
Add auth in rows
Fix:
```
aiohttp.client_exceptions.ClientResponseError: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/bigcode/the-stack-dedup/resolve/refs%2Fconvert%2Fparquet/bigcode--the-stack-dedup/parquet-train-00000-of-05140.parquet')
```
when doing pagination on bigcode/the-stack-dedup, which is gated.
closed
2023-06-23T16:37:55Z
2023-06-26T08:21:51Z
2023-06-26T08:21:50Z
lhoestq
1,771,726,572
Move dtos to its own file
Currently, we have data transfer objects in the `utils.py` file, but this one is growing and growing, and sometimes we could end up with duplicated code if we don't check the existing models. This PR just moves all the response abstractions to a `dtos.py` file. Most of the changes are to the `imports` definitions.
closed
2023-06-23T16:22:40Z
2023-06-26T11:04:14Z
2023-06-26T11:04:12Z
AndreaFrancis
1,771,645,506
Create missing Jobs when /rows cache does not exist yet
Should fix https://github.com/huggingface/datasets-server/issues/1341
closed
2023-06-23T15:22:59Z
2023-06-26T11:01:37Z
2023-06-26T11:01:36Z
AndreaFrancis
1,771,454,880
Fix regression: use parquet metadata when possible
I noticed some datasets have slow pagination, like https://huggingface.co/datasets/mlfoundations/datacomp_1b, which times out. This is because there was a regression in https://github.com/huggingface/datasets-server/pull/1287 where the parquet metadata wasn't used, because `get_best_response` returns the first successful response.
closed
2023-06-23T13:20:08Z
2023-06-27T08:56:57Z
2023-06-23T16:01:53Z
lhoestq
1,771,416,315
Remove too restrictive __all__ definitions
Remove the too restrictive `__all__` from `libcommon/simple_cache`:
- it contained only one attribute: `DoesNotExist`
- Python considers as private all the module attributes which are not defined in `__all__`
```
from libcommon.simple_cache import upsert_response

Warning: Accessing a protected member of a class or a module: 'upsert_response' is not declared in __all__
```
closed
2023-06-23T12:56:41Z
2023-06-26T13:01:27Z
2023-06-26T13:01:25Z
albertvillanova
1,771,026,707
Update to datasets 2.13.1
This should fix the parquet-and-info job for bigcode/the-stack-dedup. The patch release includes a fix that makes it ignore non-data files (in the case of bigcode/the-stack-dedup there's a license.json file that shouldn't be taken into account in the data, of course).
closed
2023-06-23T08:15:55Z
2023-06-23T12:02:32Z
2023-06-23T12:02:30Z
lhoestq
1,769,777,048
/rows returns `null` images for some datasets
Reported in https://github.com/huggingface/datasets/issues/2526. See https://huggingface.co/datasets/lombardata/panoptic_2023_06_22: it has a working /first-rows, but /rows always returns `null` for images. Edit: another one: https://huggingface.co/datasets/jonathan-roberts1/RSSCN7
closed
2023-06-22T14:13:39Z
2023-07-28T12:42:23Z
2023-07-28T12:42:23Z
lhoestq
1,769,268,896
Ensure only one job is started for the same unicity_id
To avoid multiple job runners getting the same job at the same time, for a given unicity_id (which identifies job type + parameters):
- a lock is used during the update of the selected job
- we ensure no other job is already started
- we select the newest (in date order) job from all the waiting jobs and start it (status, started_at)
- we cancel all the other waiting jobs

Should fix #1323.
closed
2023-06-22T09:10:26Z
2023-06-26T15:37:28Z
2023-06-26T15:37:27Z
severo
1,769,120,320
Use external-secrets to read secrets from AWS
null
closed
2023-06-22T07:37:10Z
2023-06-22T07:51:08Z
2023-06-22T07:51:07Z
rtrompier
1,769,074,534
Create /filter endpoint
Create /filter endpoint. I have tried to follow roughly the same logic as the /rows endpoint.

TODO:
- [x] e2e tests
- [x] chart: only increase the number of replicas
- ~~docker-compose files~~
- [x] openapi specification
- [x] documentation pages: draft of `filter.mdx`

Subsequent PRs:
- Index all datasets, not only those with text features. See:
  - #1854
- Complete the `filter.mdx` docs page
  - Give examples for all the supported operators
  - Enumerate all the supported column data types
- More complex validation of request parameters (to avoid SQL injection)
- Maybe rename the `search` service to `query`, with the 2 endpoints `search` and `filter`

(A hedged usage example follows this entry.)
closed
2023-06-22T07:10:03Z
2023-10-05T06:49:47Z
2023-10-05T06:49:09Z
albertvillanova
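A hedged usage example for the endpoint described above. The parameter names follow the public documentation of /filter as it exists today, but the dataset, `where` clause and pagination values are illustrative assumptions:

```python
import requests

response = requests.get(
    "https://datasets-server.huggingface.co/filter",
    params={
        "dataset": "glue",
        "config": "cola",
        "split": "train",
        "where": '"label"=1',
        "offset": 0,
        "length": 5,
    },
    timeout=10,
)
print(len(response.json().get("rows", [])))
```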
1,768,066,940
keep image and audio untruncated
close https://github.com/huggingface/datasets-server/issues/1416
closed
2023-06-21T17:15:27Z
2023-06-22T12:51:41Z
2023-06-22T12:51:40Z
lhoestq
1,768,045,641
Truncated first-rows may crop image URLs
See https://huggingface.co/datasets/Antreas/TALI-base; because of that, the images are not shown in the UI.
closed
2023-06-21T17:05:29Z
2023-06-22T12:51:41Z
2023-06-22T12:51:41Z
lhoestq
1,767,726,199
Test concurrency in parquet and info
It adds a test on the concurrency of job runners on the step `config-parquet-and-info` (which creates `refs/convert/parquet` and uploads parquet files), to detect when the lock is not respected, leading to `CreateCommitError`. Also:
- group all the code that accesses the Hub inside the lock (create the "branch", send the commit, get the list of files)
- retry "hf_api.list_repo_refs" on connection error (I saw it during the tests)
- adapt the sleep intervals

---

First: tests should fail (https://github.com/huggingface/datasets-server/actions/runs/5335283399/jobs/9668277338)
```
FAILED tests/job_runners/config/test_parquet_and_info.py::test_concurrency - huggingface_hub.utils._errors.HfHubHTTPError: 500 Server Error: Internal Server Error for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_DATASETS_SERVER_USER__/n_configs-16873575569236/branch/refs%2Fconvert%2Fparquet (Request ID: Root=1-64930878-4a2840cd4fc36b1752b0eac0)
```
Then, after rebasing once https://github.com/huggingface/datasets-server/pull/1414 is merged into main, the test should pass (https://github.com/huggingface/datasets-server/actions/runs/5335527787/jobs/9669124731)
closed
2023-06-21T14:22:15Z
2023-06-21T15:06:16Z
2023-06-21T15:06:15Z
severo
1,767,622,888
Fix the lock
Several processes were able to acquire the lock at the same time. (A sketch of the general locking pattern follows this entry.)
closed
2023-06-21T13:36:15Z
2023-06-21T14:25:16Z
2023-06-21T14:25:15Z
severo
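The lock-related entries in this list (this fix, the earlier "don't insert a new lock when releasing" change, and the later "only one job can get the lock" attempt) revolve around one pattern: a unique index plus an atomic update so only one owner can hold a given key. A hedged pymongo sketch of that pattern, not the repository's Lock implementation (collection and field names are assumptions):

```python
from pymongo import MongoClient, errors

client = MongoClient("mongodb://localhost:27017")  # illustrative
locks = client["datasets_server_queue"]["locks"]  # names are assumptions
locks.create_index("key", unique=True)


def acquire(key: str, owner: str) -> bool:
    try:
        # Matches only if the key is unowned (or missing, in which case it is upserted).
        result = locks.update_one(
            {"key": key, "owner": None},
            {"$set": {"owner": owner}},
            upsert=True,
        )
        return result.upserted_id is not None or result.modified_count == 1
    except errors.DuplicateKeyError:
        return False  # another process currently owns the lock


def release(key: str, owner: str) -> None:
    # Clear the owner only; never upsert a new document while releasing.
    locks.update_one({"key": key, "owner": owner}, {"$set": {"owner": None}})
```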
1,767,394,602
The e2e tests have implicit dependencies
As reported by @albertvillanova, for example, the following test does not pass:
```
$ TEST_PATH=tests/test_11_api.py::test_rows_endpoint make test
```
while this one passes:
```
$ TEST_PATH="tests/test_11_api.py::test_endpoint tests/test_11_api.py::test_rows_endpoint" make test
```
It's because `test_rows_endpoint` requires the parquet files to exist, and we implicitly rely on the fact that they have been created in `tests/test_11_api.py::test_endpoint`. Another implicit dependency is when we test `is-valid` and `valid`.
open
2023-06-21T11:38:20Z
2023-08-15T15:13:24Z
null
severo
1,767,253,537
unblock DFKI-SLT/few-nerd
null
closed
2023-06-21T10:12:26Z
2023-06-21T14:21:58Z
2023-06-21T14:21:57Z
lhoestq
1,767,202,799
Split-names-from-streaming is incorrect
e.g. only returns ["test"] for [Antreas/TALI-base](https://huggingface.co/datasets/Antreas/TALI-base) instead of ['train', 'test', 'val']
closed
2023-06-21T09:44:29Z
2023-06-21T09:50:13Z
2023-06-21T09:50:13Z
lhoestq
1,766,207,317
fix: 🐛 try to ensure only one job can get the lock
As it's difficult to test on multiple MongoDB replicas (we should do it at one point), I'll try to:
- merge to main
- deploy on prod
- refresh the dataset severo/flores_101
- see if all the configs could be processed for the parquet creation, or if we still have an error for "half" of them
closed
2023-06-20T21:20:23Z
2023-06-20T21:36:30Z
2023-06-20T21:26:59Z
severo
1,766,092,638
test: 💍 add a test on lock.git_branch
null
closed
2023-06-20T20:19:16Z
2023-06-20T20:35:13Z
2023-06-20T20:35:11Z
severo
1,765,983,796
Add tests on create_commits
see #1396
closed
2023-06-20T18:59:08Z
2023-06-20T19:08:41Z
2023-06-20T19:08:40Z
severo
1,765,621,130
Use EFS instead of NFS for datasets and parquet "local" cache
(and duckdb local cache in the PRs) related to https://github.com/huggingface/datasets-server/issues/1072 internal: see https://github.com/huggingface/infra/issues/605#issue-1758616648
closed
2023-06-20T15:12:02Z
2023-08-11T13:48:34Z
2023-08-11T13:48:34Z
severo
1,765,618,183
Use S3 + cloudfront for assets and cached-assets
related to #1072 internal: see https://github.com/huggingface/infra/issues/605#issue-1758616648
closed
2023-06-20T15:10:24Z
2023-10-09T17:54:03Z
2023-10-09T17:54:03Z
severo
1,765,617,336
Modify TTL index condition
Adding a conditional TTL index to the Job document: it will only delete records with a final state of SUCCESS, ERROR or CANCELLED. (A pymongo sketch follows this entry.)
closed
2023-06-20T15:09:53Z
2023-06-20T15:20:00Z
2023-06-20T15:19:59Z
AndreaFrancis
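A conditional (partial) TTL index like the one described above can be expressed directly with pymongo; the field name, retention period and status values below are assumptions, not the repository's schema:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # illustrative
jobs = client["datasets_server_queue"]["jobsBlue"]  # collection name is an assumption

# Documents expire one week after `finished_at`, but only once they reached a final state.
jobs.create_index(
    "finished_at",
    expireAfterSeconds=7 * 24 * 3600,
    partialFilterExpression={"status": {"$in": ["success", "error", "cancelled"]}},
)
```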
1,765,596,874
Retry get_parquet_file_and_size
In prod I got an ArrowInvalid when instantiating a pq.ParquetFile for bigcode/the-stack-dedup even though all the parquet files are valid (I ran a script and checked I could get all the pq.ParquetFile objects)
closed
2023-06-20T14:59:10Z
2023-06-21T09:54:13Z
2023-06-21T09:54:12Z
lhoestq
1,765,568,744
Temporarily remove TTL index
Like https://github.com/huggingface/datasets-server/commit/47ea65b2567db4482579cd7000393cf0a15b412e, the first step to modify the TTL index is to remove it from the code; then a deploy is needed.
closed
2023-06-20T14:46:05Z
2023-06-20T14:59:09Z
2023-06-20T14:59:07Z
AndreaFrancis
1,765,455,044
Update docs with hub parquet endpoint
Wait for (internal) https://github.com/huggingface/moon-landing/pull/6695 to be merged and deployed. close https://github.com/huggingface/datasets-server/issues/1400
closed
2023-06-20T13:49:22Z
2023-07-18T15:54:08Z
2023-07-18T15:53:36Z
lhoestq
1,765,413,674
fix: 🐛 split the default value to get a list of strings
null
closed
2023-06-20T13:28:14Z
2023-06-20T13:28:28Z
2023-06-20T13:28:26Z
severo
1,765,400,845
Update docs for hf.co/api/datasets/<dataset>/parquet endpoint
To be used instead of the datasets server /parquet endpoint in examples following https://github.com/huggingface/moon-landing/pull/6695
closed
2023-06-20T13:21:04Z
2023-07-19T12:02:37Z
2023-07-19T12:02:37Z
lhoestq
1,765,398,521
admin-UI stuck for datasets with many configs/splits
For example, https://huggingface.co/spaces/datasets-maintainers/datasets-server-admin-ui with `gsarti/flores_101` on "Dataset status" tab takes a lot of time.
closed
2023-06-20T13:19:48Z
2024-02-06T14:39:15Z
2024-02-06T14:39:14Z
severo
1,765,309,469
Rename parent job runners
Based on the new job runner for those that need a cache directory in https://github.com/huggingface/datasets-server/pull/1388, we will need a new split job runner that inherits from the new JobRunnerWithCache, to be used as part of https://github.com/huggingface/datasets-server/pull/1199 and https://github.com/huggingface/datasets-server/pull/1296/files. According to the discussions there, I renamed the parent job runners to:
> CacheDirectoryJobRunner -> JobRunnerWithCache
> DatasetsBasedJobRunner -> JobRunnerWithDatasetsCache

And their children with Dataset/Config/Split + JobRunnerWithCache/JobRunnerWithDatasetsCache.
closed
2023-06-20T12:31:31Z
2023-06-20T18:43:09Z
2023-06-20T18:43:08Z
AndreaFrancis
1,765,303,711
Ensure the parquet files in /parquet are sorted by "shard" index
And tell it in the docs
closed
2023-06-20T12:28:36Z
2023-06-30T20:02:14Z
2023-06-30T20:02:14Z
severo
1,765,292,247
Avoid commit conflicts
See https://github.com/huggingface/datasets-server/issues/1163#issuecomment-1598504866 and the following comments.
- [x] add tests to check if `parent_commit=parent_commit if not commit_infos else commit_infos[-1].oid` is correct. Yes; see https://github.com/huggingface/datasets-server/pull/1408
- [x] add write/read concern for the lock requests to ensure consistency - the [default](https://www.mongodb.com/docs/manual/reference/write-concern/#implicit-default-write-concern) is already write_concern = "majority", so we shouldn't need to change this.
- [ ] ???
closed
2023-06-20T12:23:06Z
2023-06-21T15:04:04Z
2023-06-21T15:04:04Z
severo
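A hedged sketch of the chaining pattern quoted in the first checklist item above; the function name and the `batches` argument are invented for illustration and are not the repository's actual code:

```python
from huggingface_hub import HfApi

def push_in_chained_commits(api: HfApi, repo_id: str, batches: list, parent_commit: str) -> list:
    """Each commit names the previous commit's oid as parent_commit, so a
    concurrent writer triggers an explicit conflict instead of a silent
    overwrite. Sketch only; `batches` is a list of commit operation lists."""
    commit_infos = []
    for operations in batches:
        commit_info = api.create_commit(
            repo_id=repo_id,
            repo_type="dataset",
            revision="refs/convert/parquet",
            operations=operations,
            commit_message="Add parquet shards",
            parent_commit=parent_commit if not commit_infos else commit_infos[-1].oid,
        )
        commit_infos.append(commit_info)
    return commit_infos
```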
1,765,084,367
Increase max job duration
Currently bigcode/the-stack-dedup seems to take more than 20min to copy the parquet files. This is mostly to test that it works - we can decide later if we keep this value or if we need to make this value depend on the job.
Increase max job duration: Currently bigcode/the-stack-dedup seems to take more than 20min to copy the parquet files. This is mostly to test that it works - we can decide later if we keep this value or if we need to make this value depend on the job.
closed
2023-06-20T10:11:37Z
2023-06-20T13:52:23Z
2023-06-20T13:52:22Z
lhoestq
1,765,010,399
Remove unused code from /rows API endpoint
Remove unused code from /rows API endpoint.
Remove unused code from /rows API endpoint: Remove unused code from /rows API endpoint.
closed
2023-06-20T09:26:25Z
2023-06-21T14:22:46Z
2023-06-21T14:22:44Z
albertvillanova
1,764,960,588
Raise retryable error on hfhubhttperror
see https://github.com/huggingface/datasets-server/issues/1163
Raise retryable error on hfhubhttperror: see https://github.com/huggingface/datasets-server/issues/1163
closed
2023-06-20T08:58:43Z
2023-06-20T12:36:46Z
2023-06-20T12:36:45Z
severo
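A minimal sketch of what "raise a retryable error on HfHubHTTPError" could look like; `RetryableJobError` and `run_with_retry_marker` are hypothetical names, not the classes used in the codebase:

```python
from huggingface_hub.utils import HfHubHTTPError

class RetryableJobError(Exception):
    """Hypothetical marker exception: the queue may re-run the job later."""

def run_with_retry_marker(job):
    try:
        return job()
    except HfHubHTTPError as err:
        # Hub HTTP failures are often transient, so flag them as retryable
        # instead of storing a permanent error in the cache.
        raise RetryableJobError("transient Hub error, the job can be retried") from err
```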
1,763,437,683
feat: 🎸 10x the size of supported images
null
feat: 🎸 10x the size of supported images:
closed
2023-06-19T12:27:34Z
2023-06-19T12:36:01Z
2023-06-19T12:36:00Z
severo
1,762,949,715
Fix typo in error message
I already suggested this typo fix: - https://github.com/huggingface/datasets-server/pull/1371/files#r1230650727 while reviewing PR: - #1371 And normally it was taken into account with commit: - https://github.com/huggingface/datasets-server/pull/1371/commits/1c55c9e6d55e178062eb6b85c33e7c6e71dc13ec However a subsequent force-push removed this commit: - https://github.com/huggingface/datasets-server/compare/1c55c9e6d55e178062eb6b85c33e7c6e71dc13ec..d29012b12274ea9d58d3a6c3cdbb2dd64097f202
Fix typo in error message: I already suggested this typo fix: - https://github.com/huggingface/datasets-server/pull/1371/files#r1230650727 while reviewing PR: - #1371 And normally it was taken into account with commit: - https://github.com/huggingface/datasets-server/pull/1371/commits/1c55c9e6d55e178062eb6b85c33e7c6e71dc13ec However a subsequent force-push removed this commit: - https://github.com/huggingface/datasets-server/compare/1c55c9e6d55e178062eb6b85c33e7c6e71dc13ec..d29012b12274ea9d58d3a6c3cdbb2dd64097f202
closed
2023-06-19T07:45:23Z
2023-06-19T08:55:14Z
2023-06-19T08:55:12Z
albertvillanova
1,762,411,257
Add Docker internal to extra_hosts
This is required to connect to the local DB instance on Linux; it is already added to `tools/docker-compose-dev-datasets-server.yml`
Add Docker internal to extra_hosts: This is required to connect to the local DB instance on Linux; it is already added to `tools/docker-compose-dev-datasets-server.yml`
closed
2023-06-18T18:26:33Z
2023-06-19T10:39:36Z
2023-06-19T10:39:36Z
baskrahmer
1,762,407,543
Small typos
Fix closing brackets and GH action link
Small typos: Fix closing brackets and GH action link
closed
2023-06-18T18:19:07Z
2023-06-19T08:51:26Z
2023-06-19T08:51:25Z
baskrahmer
1,761,088,254
New parent job runner for cached data
Currently, we have datasets_based_job_runner, but we need a new one that only creates a cache folder without modifying the datasets library config. Context: https://github.com/huggingface/datasets-server/pull/1296#discussion_r1232512427
New parent job runner for cached data: Currently, we have datasets_based_job_runner, but we need a new one that only creates a cache folder without modifying the datasets library config. Context: https://github.com/huggingface/datasets-server/pull/1296#discussion_r1232512427
closed
2023-06-16T18:05:01Z
2023-06-20T12:21:33Z
2023-06-20T12:21:32Z
AndreaFrancis
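A hedged sketch of what such a runner could look like; the class name comes from the related PRs above, but the method name and attributes are illustrative assumptions:

```python
from pathlib import Path
from tempfile import mkdtemp
from typing import Optional

class JobRunnerWithCache:
    """Provisions an isolated cache folder for a job without touching the
    datasets library configuration (sketch only)."""

    def __init__(self, base_cache_directory: Path) -> None:
        self.base_cache_directory = base_cache_directory
        self.cache_subdirectory: Optional[Path] = None

    def pre_compute(self) -> None:
        # Create a dedicated subdirectory for this job's temporary files.
        self.base_cache_directory.mkdir(parents=True, exist_ok=True)
        self.cache_subdirectory = Path(mkdtemp(dir=self.base_cache_directory))
```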
1,761,052,990
fix: 🐛 support bigger images
fixes https://github.com/huggingface/datasets-server/issues/1361
fix: 🐛 support bigger images: fixes https://github.com/huggingface/datasets-server/issues/1361
closed
2023-06-16T17:40:57Z
2023-06-19T11:21:41Z
2023-06-19T11:21:40Z
severo
1,760,975,012
Detect flaky hosting platforms and propose to host on the Hub
Many datasets are hosted on Zenodo or GDrive, and loaded using a loading script. But we have a lot of issues with them; it's not very reliable. @albertvillanova has fixed a lot of them, see https://huggingface.co/datasets/medal/discussions/2#648856b01927b18ced79d8b7 for example. In case of errors, we could detect if the hosting platform is the issue, and create a specific message proposing that the user move the hosting to the Hub.
Detect flaky hosting platforms and propose to host on the Hub: Many datasets are hosted on Zenodo or GDrive, and loaded using a loading script. But we have a lot of issues with them; it's not very reliable. @albertvillanova has fixed a lot of them, see https://huggingface.co/datasets/medal/discussions/2#648856b01927b18ced79d8b7 for example. In case of errors, we could detect if the hosting platform is the issue, and create a specific message proposing that the user move the hosting to the Hub.
closed
2023-06-16T16:39:23Z
2024-06-19T14:15:38Z
2024-06-19T14:15:38Z
severo
1,760,678,709
Uncaught error on config-parquet-and-info on big datasets
For some datasets that require copying original parquet files to `refs/convert/parquet` in multiple commits under a lock (see https://github.com/huggingface/datasets-server/issues/1349), we get: https://datasets-server.huggingface.co/parquet?dataset=bigcode/the-stack&config=bigcode--the-stack https://datasets-server.huggingface.co/parquet?dataset=lhoestq/tmp-lots-of-lfs-files&config=lhoestq--tmp-lots-of-lfs-files ``` {"error":"Give up after 6 attempts with <class 'huggingface_hub.utils._errors.EntryNotFoundError'>"} ``` with error_code: `UnexpectedError`
Uncaught error on config-parquet-and-info on big datasets: For some datasets that require copying original parquet files to `refs/convert/parquet` in multiple commits under a lock (see https://github.com/huggingface/datasets-server/issues/1349), we get: https://datasets-server.huggingface.co/parquet?dataset=bigcode/the-stack&config=bigcode--the-stack https://datasets-server.huggingface.co/parquet?dataset=lhoestq/tmp-lots-of-lfs-files&config=lhoestq--tmp-lots-of-lfs-files ``` {"error":"Give up after 6 attempts with <class 'huggingface_hub.utils._errors.EntryNotFoundError'>"} ``` with error_code: `UnexpectedError`
closed
2023-06-16T13:49:29Z
2023-07-17T16:40:53Z
2023-07-17T16:40:52Z
severo
1,760,673,656
Uncaught error in /rows on big datasets
For some datasets, for which the parquet files have been uploaded (or copied) with multiple commits (see https://github.com/huggingface/datasets-server/issues/1349) like `atom-in-the-universe/zlib-books-1k-50k`: ``` tiiuae/falcon-refinedweb marianna13/zlib-books-1k-500k atom-in-the-universe/zlib-books-1k-50k atom-in-the-universe/zlib-books-1k-100k atom-in-the-universe/zlib-books-1k-500k atom-in-the-universe/zlib-books-1k-1000k mlfoundations/datacomp_1b ``` the parquet files exist, all the steps are OK, but we get an uncaught error on /rows (in services/api): https://datasets-server.huggingface.co/rows?dataset=atom-in-the-universe/zlib-books-1k-50k&config=atom-in-the-universe--zlib-books-1k-50k&split=train&offset=500&length=100 ``` {"error":"Unexpected error."} ```
Uncaught error in /rows on big datasets: For some datasets, for which the parquet files have been uploaded (or copied) with multiple commits (see https://github.com/huggingface/datasets-server/issues/1349) like `atom-in-the-universe/zlib-books-1k-50k`: ``` tiiuae/falcon-refinedweb marianna13/zlib-books-1k-500k atom-in-the-universe/zlib-books-1k-50k atom-in-the-universe/zlib-books-1k-100k atom-in-the-universe/zlib-books-1k-500k atom-in-the-universe/zlib-books-1k-1000k mlfoundations/datacomp_1b ``` the parquet files exist, all the steps are OK, but we get an uncaught error on /rows (in services/api): https://datasets-server.huggingface.co/rows?dataset=atom-in-the-universe/zlib-books-1k-50k&config=atom-in-the-universe--zlib-books-1k-50k&split=train&offset=500&length=100 ``` {"error":"Unexpected error."} ```
closed
2023-06-16T13:46:20Z
2023-07-17T16:40:28Z
2023-07-17T16:40:28Z
severo
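The failing request can be reproduced with a plain HTTP call; the parameters are copied from the URL quoted in the issue:

```python
import requests

params = {
    "dataset": "atom-in-the-universe/zlib-books-1k-50k",
    "config": "atom-in-the-universe--zlib-books-1k-50k",
    "split": "train",
    "offset": 500,
    "length": 100,
}
response = requests.get("https://datasets-server.huggingface.co/rows", params=params, timeout=30)
# At the time of the report, this returned {"error": "Unexpected error."} instead of rows.
print(response.status_code, response.json())
```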
1,760,254,971
Rename dev to staging, and use staging mongodb cluster
null
Rename dev to staging, and use staging mongodb cluster:
closed
2023-06-16T09:22:22Z
2023-06-19T12:12:20Z
2023-06-19T12:12:18Z
severo
1,760,249,622
Change "dev" environment to "staging"
It makes more sense to call it "staging". And it will use the mongo atlas staging cluster
Change "dev" environment to "staging": It makes more sense to call it "staging". And it will use the mongo atlas staging cluster
closed
2023-06-16T09:18:33Z
2023-06-20T18:05:49Z
2023-06-20T18:05:49Z
severo
1,760,105,155
Upgrade prod mongo from v5 to v6
This is needed for `$in` function in TTL index: https://github.com/huggingface/datasets-server/pull/1325/files#diff-44fa7cb2645881e55953db64dafa198b2e007a2e531f70acaeebfc50ffa67953R141 See https://www.mongodb.com/docs/manual/release-notes/6.0/#indexes --- to upgrade: https://www.mongodb.com/docs/atlas/tutorial/major-version-change/
Upgrade prod mongo from v5 to v6: This is needed for `$in` function in TTL index: https://github.com/huggingface/datasets-server/pull/1325/files#diff-44fa7cb2645881e55953db64dafa198b2e007a2e531f70acaeebfc50ffa67953R141 See https://www.mongodb.com/docs/manual/release-notes/6.0/#indexes --- to upgrade: https://www.mongodb.com/docs/atlas/tutorial/major-version-change/
closed
2023-06-16T07:37:29Z
2023-06-20T15:08:11Z
2023-06-20T15:08:11Z
severo
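A hedged illustration of why the upgrade matters: only MongoDB 6.0+ accepts `$in` inside a partial index filter. The connection string, database and collection names, field names, and TTL value below are all assumptions:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical connection string
jobs = client["queue"]["jobs"]  # hypothetical database and collection names

# TTL index restricted to finished jobs: the $in inside partialFilterExpression
# is what requires MongoDB >= 6.0 (it is rejected on 5.x).
jobs.create_index(
    "finished_at",
    expireAfterSeconds=600,
    partialFilterExpression={"status": {"$in": ["success", "error", "cancelled"]}},
)
```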
1,760,089,716
Revert "Delete ttl index from queue.py code (#1378)"
This reverts commit 47ea65b2567db4482579cd7000393cf0a15b412e.
Revert "Delete ttl index from queue.py code (#1378)": This reverts commit 47ea65b2567db4482579cd7000393cf0a15b412e.
closed
2023-06-16T07:29:39Z
2023-06-16T07:29:55Z
2023-06-16T07:29:54Z
severo
1,759,657,559
Rollback TTL index
null
Rollback TTL index:
closed
2023-06-15T23:08:40Z
2023-06-15T23:20:25Z
2023-06-15T23:20:24Z
AndreaFrancis
1,759,573,668
Delete ttl index from queue.py code
First part of https://github.com/huggingface/datasets-server/issues/1326
Delete ttl index from queue.py code: First part of https://github.com/huggingface/datasets-server/issues/1326
closed
2023-06-15T21:28:16Z
2023-06-15T22:08:08Z
2023-06-15T22:08:07Z
AndreaFrancis
1,759,467,470
[docs] Add build notebook workflow
Enables the doc-builder to build Colab notebooks :)
[docs] Add build notebook workflow: Enables the doc-builder to build Colab notebooks :)
closed
2023-06-15T19:58:06Z
2023-06-15T20:22:51Z
2023-06-15T20:22:50Z
stevhliu
1,759,417,823
[docs] Improvements
Based on @mishig25's [feedback](https://huggingface.slack.com/archives/C0311GZ7R6K/p1684484245379859), this adds: - a response to the code snippets in the Quickstart - an end-to-end example of using `/parquet` to get a dataset, analyze it, and plot the results - button to open in a Colab notebook to run examples right away
[docs] Improvements: Based on @mishig25's [feedback](https://huggingface.slack.com/archives/C0311GZ7R6K/p1684484245379859), this adds: - a response to the code snippets in the Quickstart - an end-to-end example of using `/parquet` to get a dataset, analyze it, and plot the results - button to open in a Colab notebook to run examples right away
closed
2023-06-15T19:23:55Z
2023-06-16T16:10:35Z
2023-06-16T16:10:04Z
stevhliu
1,759,223,139
Fix fill_builder_info
close https://github.com/huggingface/datasets-server/issues/1374 NamedSplit can't be converted by orjson
Fix fill_builder_info: close https://github.com/huggingface/datasets-server/issues/1374 NamedSplit can't be converted by orjson
closed
2023-06-15T16:57:35Z
2023-06-15T20:37:36Z
2023-06-15T20:37:35Z
lhoestq
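A minimal reproduction of the serialization problem named in this PR, plus one possible workaround (a sketch, not necessarily the fix that was merged):

```python
import orjson
from datasets.splits import NamedSplit

# orjson only accepts str dict keys by default, and NamedSplit is not a plain
# str, so dumping this dict directly raises "TypeError: Dict key must be str".
splits_info = {NamedSplit("train"): {"num_examples": 100}}

# Possible workaround: stringify the keys before serializing.
print(orjson.dumps({str(split): info for split, info in splits_info.items()}))
```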
1,759,049,916
Truncated cells seem to prevent conversion to parquet
See https://github.com/huggingface/datasets-server/pull/1372#issuecomment-1593249655 ``` Traceback (most recent call last): File "/src/services/worker/src/worker/job_manager.py", line 167, in process if len(orjson_dumps(content)) > self.worker_config.content_max_bytes: File "/src/libs/libcommon/src/libcommon/utils.py", line 79, in orjson_dumps return orjson.dumps(content, option=orjson.OPT_UTC_Z, default=orjson_default) TypeError: Dict key must be str ``` on https://huggingface.co/datasets/philippemo/dummy_dataset_without_schema_12_06
Truncated cells seem to prevent conversion to parquet: See https://github.com/huggingface/datasets-server/pull/1372#issuecomment-1593249655 ``` Traceback (most recent call last): File "/src/services/worker/src/worker/job_manager.py", line 167, in process if len(orjson_dumps(content)) > self.worker_config.content_max_bytes: File "/src/libs/libcommon/src/libcommon/utils.py", line 79, in orjson_dumps return orjson.dumps(content, option=orjson.OPT_UTC_Z, default=orjson_default) TypeError: Dict key must be str ``` on https://huggingface.co/datasets/philippemo/dummy_dataset_without_schema_12_06
closed
2023-06-15T15:11:19Z
2023-06-15T20:37:36Z
2023-06-15T20:37:36Z
severo
1,758,633,320
Refac hub_datasets fixture
This way there is no need to set up all the hub datasets fixtures just to run one single test. Close #921
Refac hub_datasets fixture: This way there is no need to set up all the hub datasets fixtures just to run one single test. Close #921
closed
2023-06-15T11:35:02Z
2023-06-15T20:53:44Z
2023-06-15T20:53:43Z
lhoestq
1,758,454,961
Update datasets dependency to 2.13.0 version
After 2.13.0 datasets release, update dependencies on it. Note that I have also removed the explicit dependency on `datasets` from `services/api`, - see commit: https://github.com/huggingface/datasets-server/commit/a2c0cd908b45a7065d936964c4d0477143146d6c This is analogous to what was previously done on `services/worker`. - See discussion: https://github.com/huggingface/datasets-server/pull/1147#issuecomment-1539883057 - See commit: https://github.com/huggingface/datasets-server/pull/1147/commits/4163a18ec561ddf93f93c096d76c2b06cf652f4d Fix #1370.
Update datasets dependency to 2.13.0 version: After 2.13.0 datasets release, update dependencies on it. Note that I have also removed the explicit dependency on `datasets` from `services/api`, - see commit: https://github.com/huggingface/datasets-server/commit/a2c0cd908b45a7065d936964c4d0477143146d6c This is analogous to what was previously done on `services/worker`. - See discussion: https://github.com/huggingface/datasets-server/pull/1147#issuecomment-1539883057 - See commit: https://github.com/huggingface/datasets-server/pull/1147/commits/4163a18ec561ddf93f93c096d76c2b06cf652f4d Fix #1370.
closed
2023-06-15T09:48:39Z
2023-06-15T20:49:23Z
2023-06-15T15:59:03Z
albertvillanova
1,757,309,822
Adding limit for number of configs
Closes https://github.com/huggingface/datasets-server/issues/1367
Adding limit for number of configs: Closes https://github.com/huggingface/datasets-server/issues/1367
closed
2023-06-14T16:56:59Z
2023-06-15T14:58:43Z
2023-06-15T14:58:42Z
AndreaFrancis
1,757,272,674
Update datasets to 2.13.0
https://github.com/huggingface/datasets/releases/tag/2.13.0 Related to the datasets server: - Better row group size in push_to_hub by @lhoestq in https://github.com/huggingface/datasets/pull/5935 - Make get_from_cache use custom temp filename that is locked by @albertvillanova in https://github.com/huggingface/datasets/pull/5938 Did I miss something @huggingface/datasets-server ?
Update datasets to 2.13.0: https://github.com/huggingface/datasets/releases/tag/2.13.0 Related to the datasets server: - Better row group size in push_to_hub by @lhoestq in https://github.com/huggingface/datasets/pull/5935 - Make get_from_cache use custom temp filename that is locked by @albertvillanova in https://github.com/huggingface/datasets/pull/5938 Did I miss something @huggingface/datasets-server ?
closed
2023-06-14T16:31:24Z
2023-06-15T15:59:05Z
2023-06-15T15:59:05Z
severo
1,757,265,256
Remove duplicates in cache
We have cache entries for `bigscience/P3` and `BigScience/P3` for example: they resolve to the same dataset.
Remove duplicates in cache: We have cache entries for `bigscience/P3` and `BigScience/P3` for example: they resolve to the same dataset.
closed
2023-06-14T16:26:07Z
2023-07-17T16:41:02Z
2023-07-17T16:41:02Z
severo
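A small sketch of how such duplicates could be detected; the cache contents below are hypothetical:

```python
from collections import defaultdict

cached_dataset_names = ["bigscience/P3", "BigScience/P3", "glue"]  # hypothetical cache contents

# Group the cached names case-insensitively; groups with more than one spelling
# resolve to the same dataset on the Hub and should be deduplicated.
groups = defaultdict(set)
for name in cached_dataset_names:
    groups[name.lower()].add(name)
duplicates = {key: spellings for key, spellings in groups.items() if len(spellings) > 1}
print(duplicates)  # {'bigscience/p3': {'bigscience/P3', 'BigScience/P3'}}
```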
1,757,157,046
feat: 🎸 reduce the resources
null
feat: 🎸 reduce the resources:
closed
2023-06-14T15:23:53Z
2023-06-14T15:25:10Z
2023-06-14T15:25:09Z
severo
1,757,098,647
Set a limit on the number of configs
Dataset https://huggingface.co/datasets/Muennighoff/flores200 has more than 40,000 configs. It's too much for our infrastructure for now. We should set a limit on it.
Set a limit on the number of configs: Dataset https://huggingface.co/datasets/Muennighoff/flores200 has more than 40,000 configs. It's too much for our infrastructure for now. We should set a limit on it.
closed
2023-06-14T14:55:05Z
2023-06-15T14:58:44Z
2023-06-15T14:58:43Z
severo
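A hedged sketch of the kind of guard this issue asks for; the threshold and the error message are assumptions, not the values that were implemented:

```python
from datasets import get_dataset_config_names

MAX_NUM_CONFIGS = 3_000  # hypothetical threshold; the real limit is a configuration value

config_names = get_dataset_config_names("Muennighoff/flores200")  # >40,000 configs per the issue
if len(config_names) > MAX_NUM_CONFIGS:
    raise ValueError(
        f"The dataset has {len(config_names)} configs, which exceeds the limit of {MAX_NUM_CONFIGS}."
    )
```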