id | title | body | description | state | created_at | updated_at | closed_at | user
---|---|---|---|---|---|---|---|---|
1,266,160,719 | fix: 🐛 don't mark empty splits as stalled | should fix #185 and #177 | fix: 🐛 don't mark empty splits as stalled: should fix #185 and #177 | closed | 2022-06-09T13:46:04Z | 2022-06-10T09:31:47Z | 2022-06-10T09:31:46Z | severo |
1,264,802,000 | feat: 🎸 store the rows content as a JSON string | For the new splits stored in the cache database, the rows' content is
converted to a JSON string. The return type of get_rows_response has
changed: it's a 4-tuple, with optional json_content_rows and
content_rows.
BREAKING CHANGE: 🧨 return type of get_rows_response has changed | feat: 🎸 store the rows content as a JSON string: For the new splits stored in the cache database, the rows' content is
converted to a JSON string. The return type of get_rows_response has
changed: it's a 4-tuple, with optional json_content_rows and
content_rows.
BREAKING CHANGE: 🧨 return type of get_rows_response has changed | closed | 2022-06-08T14:00:27Z | 2022-06-14T14:55:08Z | 2022-06-14T14:55:05Z | severo |
1,264,742,710 | Store the JSON response in mongo, not the native objects | For example, trying to store a row with a Timestamp field will generate an error:
https://github.com/huggingface/datasets/issues/4413
> Type is not JSON serializable: Timestamp
See https://stackoverflow.com/questions/50404559/python-error-typeerror-object-of-type-timestamp-is-not-json-serializable
Options to solve it:
1. convert to JSON (using [`orjson_dumps`](https://github.com/huggingface/datasets-server/blob/7444bd367cf391d849b529d31eed4730d2d2b405/libs/libutils/src/libutils/utils.py#L56-L64) as the service `api` is currently doing). It would be the best way: the API would only serve a precomputed JSON, instead of doing the conversion on every request, which is redundant and thus reduces the value of the cache. The issue is that it requires migrating the database, which needs careful preparation.
2. convert to JSON, then convert back to a native dict, before storing into the mongo database. It's really not optimal since the data is converted, then converted back, then stored, then converted again on every request. But it's a lot easier to implement since no migration is required.
3. create a new optional field in the mongo cached splits collection: `json_rows_response`. If present, return it; otherwise, return `rows_response`. We would have to give the caller a flag to know whether the response is already in JSON or not. And the worker that fills the database would now only fill `json_rows_response`. This way, we have more time to migrate the database, and we can do it more easily by first filling the missing `json_rows_response`, then deprecating `rows_response`. | Store the JSON response in mongo, not the native objects: For example, trying to store a row with a Timestamp field will generate an error:
https://github.com/huggingface/datasets/issues/4413
> Type is not JSON serializable: Timestamp
See https://stackoverflow.com/questions/50404559/python-error-typeerror-object-of-type-timestamp-is-not-json-serializable
Options to solve it:
1. convert to JSON (using [`orjson_dumps`](https://github.com/huggingface/datasets-server/blob/7444bd367cf391d849b529d31eed4730d2d2b405/libs/libutils/src/libutils/utils.py#L56-L64) as the service `api` is currently doing). It would be the best way: the API would only serve a precomputed JSON, instead of doing the conversion on every request, which is redundant and thus reduces the value of the cache. The issue is that it requires migrating the database, which needs careful preparation.
2. convert to JSON, then convert back to a native dict, before storing into the mongo database. It's really not optimal since the data is converted, then converted back, then stored, then converted again on every request. But it's a lot easier to implement since no migration is required.
3. create a new optional field in the mongo cached splits collection: `json_rows_response`. If present, return it; otherwise, return `rows_response`. We would have to give the caller a flag to know whether the response is already in JSON or not. And the worker that fills the database would now only fill `json_rows_response`. This way, we have more time to migrate the database, and we can do it more easily by first filling the missing `json_rows_response`, then deprecating `rows_response`. | closed | 2022-06-08T13:21:35Z | 2022-06-09T13:33:05Z | 2022-06-09T13:33:05Z | severo |
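The serialization error above comes from the standard `json` module rejecting non-native types such as `pandas.Timestamp`. A minimal sketch of option 1, assuming `orjson` (illustrative only, not the linked `orjson_dumps` implementation):

```python
import orjson

def fallback(obj):
    # Hypothetical last-resort handler; orjson already serializes datetime
    # and its subclasses (including pandas.Timestamp) natively.
    if hasattr(obj, "isoformat"):
        return obj.isoformat()
    raise TypeError(f"Type is not JSON serializable: {type(obj).__name__}")

def dumps(content) -> bytes:
    # orjson returns UTF-8 bytes that can be stored in the cache document
    # and served as-is on every request, instead of re-encoding each time.
    return orjson.dumps(content, default=fallback)
```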
1,264,497,978 | Allow none path in audio | null | Allow none path in audio: | closed | 2022-06-08T10:00:33Z | 2022-06-08T12:24:50Z | 2022-06-08T12:24:49Z | severo |
1,264,453,142 | fix: 🐛 use a new name for the numba cache preparation | it was ignored since it had the same name as for the datasets cache | fix: 🐛 use a new name for the numba cache preparation: it was ignored since it had the same name as for the datasets cache | closed | 2022-06-08T09:24:11Z | 2022-06-08T09:24:16Z | 2022-06-08T09:24:15Z | severo |
1,264,438,724 | fix: 🐛 ensure the NUMBA_CACHE_DIR is set | it's needed for librosa on a cloud infrastructure. See
https://stackoverflow.com/a/63367171/7351594. Related to
https://github.com/huggingface/datasets/issues/4363. | fix: 🐛 ensure the NUMBA_CACHE_DIR is set: it's needed for librosa on a cloud infrastructure. See
https://stackoverflow.com/a/63367171/7351594. Related to
https://github.com/huggingface/datasets/issues/4363. | closed | 2022-06-08T09:12:24Z | 2022-06-08T09:13:09Z | 2022-06-08T09:13:08Z | severo |
1,264,386,120 | feat: 🎸 use the new certificate | see https://github.com/huggingface/datasets-server/issues/319 | feat: 🎸 use the new certificate: see https://github.com/huggingface/datasets-server/issues/319 | closed | 2022-06-08T08:29:21Z | 2022-06-08T08:29:31Z | 2022-06-08T08:29:30Z | severo |
1,264,359,094 | fix: 🐛 adapt the pods resources | we cannot use more than a node's resources | fix: 🐛 adapt the pods resources: we cannot use more than a node's resources | closed | 2022-06-08T08:06:39Z | 2022-06-08T08:06:45Z | 2022-06-08T08:06:44Z | severo |
1,263,709,060 | feat: 🎸 update the resources by trial and error | null | feat: 🎸 update the resources by trial and error: | closed | 2022-06-07T18:34:41Z | 2022-06-08T07:47:57Z | 2022-06-08T07:47:56Z | severo |
1,263,693,813 | feat: 🎸 increase resources for the workers | null | feat: 🎸 increase resources for the workers: | closed | 2022-06-07T18:19:26Z | 2022-06-07T18:19:49Z | 2022-06-07T18:19:48Z | severo |
1,263,648,240 | feat: 🎸 update images | null | feat: 🎸 update images: | closed | 2022-06-07T17:37:09Z | 2022-06-07T17:37:20Z | 2022-06-07T17:37:19Z | severo |
1,263,643,591 | Revert "Fix worker (#354)" | This reverts commit bc8a0f906b8485b9f7debf82048f2779d0793a32. | Revert "Fix worker (#354)": This reverts commit bc8a0f906b8485b9f7debf82048f2779d0793a32. | closed | 2022-06-07T17:33:30Z | 2022-06-07T17:36:43Z | 2022-06-07T17:36:43Z | severo |
1,263,623,948 | Fix worker | null | Fix worker: | closed | 2022-06-07T17:16:54Z | 2022-06-07T17:31:47Z | 2022-06-07T17:31:23Z | severo |
1,263,530,977 | feat: 🎸 upgrade libqueue and libcache | hopefully it fixes the issue "Internal Server Error" when calling
https://datasets-server.huggingface.co/queue-dump-waiting-started | feat: 🎸 upgrade libqueue and libcache: hopefully it fixes the issue "Internal Server Error" when calling
https://datasets-server.huggingface.co/queue-dump-waiting-started | closed | 2022-06-07T16:06:00Z | 2022-06-07T16:11:55Z | 2022-06-07T16:09:39Z | severo |
1,263,449,224 | Remove the datasets blocklist and re-enqueue server errors | null | Remove the datasets blocklist and re-enqueue server errors: | closed | 2022-06-07T15:04:19Z | 2022-06-07T15:41:27Z | 2022-06-07T15:41:26Z | severo |
1,263,173,054 | feat: 🎸 remove old domain datasets-server.huggingface.tech | also fix some docs | feat: 🎸 remove old domain datasets-server.huggingface.tech: also fix some docs | closed | 2022-06-07T11:54:52Z | 2022-06-07T12:39:50Z | 2022-06-07T12:39:49Z | severo |
1,259,935,640 | ci: 🎡 add missing secrets | null | ci: 🎡 add missing secrets: | closed | 2022-06-03T13:44:05Z | 2022-06-03T13:52:16Z | 2022-06-03T13:52:16Z | severo |
1,259,920,792 | ci: 🎡 fix missing replace | null | ci: 🎡 fix missing replace: | closed | 2022-06-03T13:30:35Z | 2022-06-03T13:52:43Z | 2022-06-03T13:52:42Z | severo |
1,259,908,258 | ci: 🎡 checkout the repo before accessing a file | null | ci: 🎡 checkout the repo before accessing a file: | closed | 2022-06-03T13:19:54Z | 2022-06-03T13:27:25Z | 2022-06-03T13:27:24Z | severo |
1,259,904,112 | ci: 🎡 fix the file extension | null | ci: 🎡 fix the file extension: | closed | 2022-06-03T13:16:19Z | 2022-06-03T13:16:24Z | 2022-06-03T13:16:24Z | severo |
1,259,886,500 | Be more explicit about the current docker images | null | Be more explicit about the current docker images: | closed | 2022-06-03T13:00:26Z | 2022-06-03T13:14:22Z | 2022-06-03T13:14:21Z | severo |
1,259,705,972 | Be more explicit about the current docker images | null | Be more explicit about the current docker images: | closed | 2022-06-03T09:55:47Z | 2022-06-03T09:55:55Z | 2022-06-03T09:55:55Z | severo |
1,259,637,301 | ci: 🎡 use reusable workflows, and conditional runs on path | This way: all the checks and builds only occur when the corresponding
code has been changed. | ci: 🎡 use reusable workflows, and conditional runs on path: This way: all the checks and builds only occur when the corresponding
code has been changed. | closed | 2022-06-03T08:48:10Z | 2022-06-03T08:48:47Z | 2022-06-03T08:48:46Z | severo |
1,259,563,029 | Module cache not available? | See https://github.com/huggingface/datasets/issues/4442 and https://github.com/huggingface/datasets/issues/4441 | Module cache not available?: See https://github.com/huggingface/datasets/issues/4442 and https://github.com/huggingface/datasets/issues/4441 | closed | 2022-06-03T07:39:44Z | 2022-06-07T18:51:01Z | 2022-06-07T18:51:00Z | severo |
1,258,199,630 | fix: 🐛 give every servicemonitor its name | they both had the same name. Seen with "kubectl get servicemonitor" | fix: 🐛 give every servicemonitor its name: they both had the same name. Seen with "kubectl get servicemonitor" | closed | 2022-06-02T13:36:31Z | 2022-06-02T13:36:38Z | 2022-06-02T13:36:37Z | severo |
1,258,178,971 | Expose admin metrics | null | Expose admin metrics: | closed | 2022-06-02T13:21:06Z | 2022-06-02T13:21:13Z | 2022-06-02T13:21:12Z | severo |
1,258,071,285 | Add metrics endpoint to admin | See #308 | Add metrics endpoint to admin: See #308 | closed | 2022-06-02T11:44:57Z | 2022-06-02T12:06:59Z | 2022-06-02T12:06:59Z | severo |
1,256,423,453 | feat: 🎸 update docker image | null | feat: 🎸 update docker image: | closed | 2022-06-01T16:23:46Z | 2022-06-01T16:26:06Z | 2022-06-01T16:26:05Z | severo |
1,256,420,888 | feat: 🎸 add an index to optimize the distinct query | Weirdly, the index wasn't necessary locally, but was needed in production (not
sure why). Maybe related to https://jira.mongodb.org/browse/SERVER-19507
and
https://stackoverflow.com/questions/36006208/mongodb-distinct-with-query-doesnt-use-indexes. | feat: 🎸 add an index to optimize the distinct query: Weirdly, the index wasn't necessary locally, but was needed in production (not
sure why). Maybe related to https://jira.mongodb.org/browse/SERVER-19507
and
https://stackoverflow.com/questions/36006208/mongodb-distinct-with-query-doesnt-use-indexes. | closed | 2022-06-01T16:22:47Z | 2022-06-01T16:22:52Z | 2022-06-01T16:22:52Z | severo |
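A minimal sketch of such an index with pymongo; the connection string, database, collection, and field names are assumptions, not the actual schema:

```python
from pymongo import MongoClient

splits = MongoClient("mongodb://localhost:27017")["cache"]["splits"]

# A single-field index lets a distinct() query read the values from the
# index instead of scanning every document in the collection.
splits.create_index("dataset_name")
```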
1,256,230,462 | feat: 🎸 update docker image | null | feat: 🎸 update docker image: | closed | 2022-06-01T15:02:59Z | 2022-06-01T15:04:36Z | 2022-06-01T15:04:35Z | severo |
1,256,227,255 | feat: 🎸 update dependencies to update libcache and libqueue | I forgot to update poetry.lock, which means that the previous version of libcache was used, and https://github.com/huggingface/datasets-server/pull/333 was ignored | feat: 🎸 update dependencies to update libcache and libqueue: I forgot to update poetry.lock, which means that the previous version of libcache was used, and https://github.com/huggingface/datasets-server/pull/333 was ignored | closed | 2022-06-01T15:01:37Z | 2022-06-01T15:01:43Z | 2022-06-01T15:01:42Z | severo |
1,256,140,073 | feat: 🎸 update api docker image | null | feat: 🎸 update api docker image: | closed | 2022-06-01T14:24:50Z | 2022-06-01T14:26:14Z | 2022-06-01T14:26:13Z | severo |
1,256,127,875 | Increase the capacity of the workers to unblock datasets | Currently, some datasets that are too heavy (too much RAM, generally) and crash the workers are listed manually in a blocklist:
https://github.com/huggingface/datasets-server/blob/d8e0ea083e69d7055e37fd5bab3d8c3a72daf66b/infra/charts/datasets-server/env/prod.yaml#L23
In particular, @sanchit-gandhi asked for `LIUM/tedlium` on [Slack](https://huggingface.slack.com/archives/C0311GZ7R6K/p1654092366525679) | Increase the capacity of the workers to unblock datasets: Currently, some datasets that are too heavy (too much RAM, generally) and crash the workers are listed manually in a blocklist:
https://github.com/huggingface/datasets-server/blob/d8e0ea083e69d7055e37fd5bab3d8c3a72daf66b/infra/charts/datasets-server/env/prod.yaml#L23
In particular, @sanchit-gandhi asked for `LIUM/tedlium` on [Slack](https://huggingface.slack.com/archives/C0311GZ7R6K/p1654092366525679) | closed | 2022-06-01T14:19:34Z | 2022-06-08T12:58:38Z | 2022-06-08T08:33:51Z | severo |
1,256,119,181 | fix: 🐛 optimize the query to get the list of valid datasets | optimization: avoid first getting all the dataset names from the splits,
and then putting them into a set: it's a lot quicker to use distinct to only
get the distinct names from mongo.
fixes https://github.com/huggingface/datasets-server/issues/326
On a dump of the production database, it now takes 75ms
```
0.075 cache.py:456(get_valid_or_stalled_dataset_names)
```
instead of 2 seconds! 🥲
```
2.082 cache.py:456(get_valid_or_stalled_dataset_names)
```
For the record: I use [cProfile](https://towardsdatascience.com/how-to-profile-your-code-in-python-e70c834fad89) to profile the calls | fix: 🐛 optimize the query to get the list of valid datasets: optimization: avoid first getting all the dataset names from the splits,
and then putting them into a set: it's a lot quicker to use distinct to only
get the distinct names from mongo.
fixes https://github.com/huggingface/datasets-server/issues/326
On a dump of the production database, it now takes 75ms
```
0.075 cache.py:456(get_valid_or_stalled_dataset_names)
```
instead of 2 seconds! 🥲
```
2.082 cache.py:456(get_valid_or_stalled_dataset_names)
```
For the record: I use [cProfile](https://towardsdatascience.com/how-to-profile-your-code-in-python-e70c834fad89) to profile the calls | closed | 2022-06-01T14:15:51Z | 2022-06-01T14:23:44Z | 2022-06-01T14:23:43Z | severo |
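The optimization boils down to replacing a client-side scan-and-deduplicate with a server-side `distinct`. A sketch with pymongo, reusing the assumed connection and field names from the index example above:

```python
from pymongo import MongoClient

splits = MongoClient("mongodb://localhost:27017")["cache"]["splits"]

# Before: fetch every split document, then deduplicate in Python.
dataset_names = {doc["dataset_name"] for doc in splits.find({}, {"dataset_name": 1})}

# After: mongo computes the distinct values server-side; with an index on
# the field, it does not have to scan the documents at all.
dataset_names = set(splits.distinct("dataset_name"))
```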
1,255,489,184 | Change moonlanding app token? | Should we replace `dataset-preview-backend` with `datasets-server`:
- here: https://github.com/huggingface/moon-landing/blob/f2ee3896cff3aa97aafb3476e190ef6641576b6f/server/models/App.ts#L16
- and here: https://github.com/huggingface/moon-landing/blob/82e71c10ed0b385e55a29f43622874acfc35a9e3/server/test/end_to_end_apps.ts#L243-L271
What are the consequences then? How to do it without too much downtime? | Change moonlanding app token?: Should we replace `dataset-preview-backend` with `datasets-server`:
- here: https://github.com/huggingface/moon-landing/blob/f2ee3896cff3aa97aafb3476e190ef6641576b6f/server/models/App.ts#L16
- and here: https://github.com/huggingface/moon-landing/blob/82e71c10ed0b385e55a29f43622874acfc35a9e3/server/test/end_to_end_apps.ts#L243-L271
What are the consequences then? How to do it without too much downtime? | closed | 2022-06-01T09:29:12Z | 2022-09-19T09:33:33Z | 2022-09-19T09:33:33Z | severo |
1,255,406,352 | feat: 🎸 use the tls certificate with two domains | see
https://us-east-1.console.aws.amazon.com/acm/home?region=us-east-1#/certificates/bfcad79a-111b-4852-adc2-5d78f4132eb6 | feat: 🎸 use the tls certificate with two domains: see
https://us-east-1.console.aws.amazon.com/acm/home?region=us-east-1#/certificates/bfcad79a-111b-4852-adc2-5d78f4132eb6 | closed | 2022-06-01T08:51:04Z | 2022-06-01T08:51:16Z | 2022-06-01T08:51:15Z | severo |
1,254,068,290 | feat: 🎸 update the docker image for api | also: allow the deployments to use different docker image tags, so that
the workers are not redeployed if only the api has changed, for example. | feat: 🎸 update the docker image for api: also: allow the deployments to use different docker image tags, so that
the workers are not redeployed if only the api has changed, for example. | closed | 2022-05-31T15:49:21Z | 2022-05-31T15:49:30Z | 2022-05-31T15:49:30Z | severo |
1,254,047,061 | Optimize the query behind /splits | null | Optimize the query behind /splits: | closed | 2022-05-31T15:31:21Z | 2022-05-31T15:52:32Z | 2022-05-31T15:44:21Z | severo |
1,253,960,807 | Respond to datasets-server.huggingface.co | See https://github.com/huggingface/datasets-server/issues/319
It still responds to datasets-server.huggingface.tech too | Respond to datasets-server.huggingface.co: See https://github.com/huggingface/datasets-server/issues/319
It still responds to datasets-server.huggingface.tech too | closed | 2022-05-31T14:32:28Z | 2022-05-31T14:42:14Z | 2022-05-31T14:42:13Z | severo |
1,253,893,518 | Reduce the response time of /splits | /splits takes about 1s to respond
See https://github.com/huggingface/datasets-server/issues/250#issuecomment-1141922073.
Already reported in https://github.com/huggingface/datasets-server/issues/301 | Reduce the response time of /splits: /splits takes about 1s to respond
See https://github.com/huggingface/datasets-server/issues/250#issuecomment-1141922073.
Already reported in https://github.com/huggingface/datasets-server/issues/301 | closed | 2022-05-31T13:44:47Z | 2022-05-31T16:14:51Z | 2022-05-31T16:14:51Z | severo |
1,253,892,859 | Reduce the response time of /valid | /valid takes about 8 seconds to respond
See https://github.com/huggingface/datasets-server/issues/250#issuecomment-1141922073 | Reduce the response time of /valid: /valid takes about 8 seconds to respond
See https://github.com/huggingface/datasets-server/issues/250#issuecomment-1141922073 | closed | 2022-05-31T13:44:16Z | 2022-06-01T16:28:32Z | 2022-06-01T14:23:43Z | severo |
1,253,891,639 | Test if /valid is a blocking request | https://github.com/huggingface/datasets-server/issues/250#issuecomment-1142013300
> > the requests to /valid are very long: do they block the incoming requests?)
> Depends on if your long running query is blocking the GIL or not. If you have async calls, it should be able to switch and take care of other requests, if it's computing something then yeah, probably blocking everything else.
- [ ] find if the long requests like /valid are blocking the concurrent requests
- [ ] if so: fix it | Test if /valid is a blocking request: https://github.com/huggingface/datasets-server/issues/250#issuecomment-1142013300
> > the requests to /valid are very long: do they block the incoming requests?)
> Depends on if your long running query is blocking the GIL or not. If you have async calls, it should be able to switch and take care of other requests, if it's computing something then yeah, probably blocking everything else.
- [ ] find if the long requests like /valid are blocking the concurrent requests
- [ ] if so: fix it | closed | 2022-05-31T13:43:20Z | 2022-09-16T17:39:20Z | 2022-09-16T17:39:20Z | severo |
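If the query does turn out to block the event loop, one common fix is to push the synchronous database call into Starlette's threadpool. A sketch, where `get_valid_datasets` is a placeholder for the real blocking query:

```python
from starlette.concurrency import run_in_threadpool
from starlette.responses import JSONResponse

def get_valid_datasets() -> list:
    # Placeholder for the blocking mongo query behind /valid.
    return []

async def valid_endpoint(request):
    # The blocking call runs in a worker thread, so the event loop stays
    # free to serve concurrent requests instead of stalling on this query.
    datasets = await run_in_threadpool(get_valid_datasets)
    return JSONResponse({"valid": datasets})
```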
1,253,882,608 | /assets seems to be regularly unavailable | See the incidents here: https://betteruptime.com/team/14149/incidents?m=691070
It might be fixed by #319 | /assets seems to be regularly unavailable: See the incidents here: https://betteruptime.com/team/14149/incidents?m=691070
It might be fixed by #319 | closed | 2022-05-31T13:36:55Z | 2022-06-02T07:50:16Z | 2022-06-02T07:50:15Z | severo |
1,253,872,292 | Find the best way to manage the libs in the monorepo | The services: admin, api, worker, depend on the libs: libcache, libqueue and libutils.
Note that libcache and libqueue themselves also depend on libutils.
Currently, the dependencies are defined using a relative path to a wheel file (obtained after running `poetry build` in the library directory):
https://github.com/huggingface/datasets-server/blob/58d47ea57423d9d74f49eb6ba6eafbeee029a92d/services/api/pyproject.toml#L9
Before, they were defined as a relative path to the library directory
https://github.com/huggingface/datasets-server/blob/14d2d0cf40db68434afc7f9cb9f22d43f649999c/services/api_service/pyproject.toml#L9
See https://python-poetry.org/docs/dependency-specification/#path-dependencies.
### Antecedents
Some antecedents to take into account:
- https://github.com/huggingface/datasets-server/issues/216#issuecomment-1139880637:
> I cannot make it work, since the idea is to avoid using poetry to install, and instead build a wheel and install it: but due to 1. the monorepo structure, with libs/ and services/, which means local relative paths that start with ../../libs/, 2. ../ not being supported by pip, 3. git subrepositories not supported by poetry (https://github.com/python-poetry/poetry/pull/5172), I don't know well how to manage it.
- https://github.com/huggingface/datasets-server/pull/314
### Issues with the current solution
One of the issues with the current setup is that if we update one detail in the libutils library for example, we then have to update the file in a lot of places, see https://github.com/huggingface/datasets-server/pull/322/files for example.
Another one is that "Go to definition" on a function in VSCode, for example, jumps to the library files in the virtual environment, not to the upstream files (in libs/).
### Alternatives
An alternative would be to:
- publish the libraries to a public repo, like pypi. It would be easier with #320.
- depend on the remote repo, with a [caret requirement](https://python-poetry.org/docs/dependency-specification/#caret-requirements) so that we could update easily. | Find the best way to manage the libs in the monorepo: The services: admin, api, worker, depend on the libs: libcache, libqueue and libutils.
Note that libcache and libqueue themselves also depend on libutils.
Currently, the dependencies are defined using a relative path to a wheel file (obtained after running `poetry build` in the library directory):
https://github.com/huggingface/datasets-server/blob/58d47ea57423d9d74f49eb6ba6eafbeee029a92d/services/api/pyproject.toml#L9
Before, they were defined as a relative path to the library directory
https://github.com/huggingface/datasets-server/blob/14d2d0cf40db68434afc7f9cb9f22d43f649999c/services/api_service/pyproject.toml#L9
See https://python-poetry.org/docs/dependency-specification/#path-dependencies.
### Antecedents
Some antecedents to take into account:
- https://github.com/huggingface/datasets-server/issues/216#issuecomment-1139880637:
> I cannot make it work, since the idea is to avoid using poetry to install, and instead build a wheel and install it: but due to 1. the monorepo structure, with libs/ and services/, which means local relative paths that start with ../../libs/, 2. ../ not being supported by pip, 3. git subrepositories not supported by poetry (https://github.com/python-poetry/poetry/pull/5172), I don't know well how to manage it.
- https://github.com/huggingface/datasets-server/pull/314
### Issues with the current solution
One of the issues with the current setup is that if we update one detail in the libutils library for example, we then have to update the file in a lot of places, see https://github.com/huggingface/datasets-server/pull/322/files for example.
Another one is that "Go to definition" on a function in VSCode, for example, jumps to the library files in the virtual environment, not to the upstream files (in libs/).
### Alternatives
An alternative would be to:
- publish the libraries to a public repo, like pypi. It would be easier with #320.
- depend on the remote repo, with a [caret requirement](https://python-poetry.org/docs/dependency-specification/#caret-requirements) so that we could update easily. | closed | 2022-05-31T13:29:02Z | 2022-09-19T09:07:19Z | 2022-09-19T09:07:18Z | severo |
1,253,863,718 | feat: 🎸 upgrade dependencies | it removes most of the "safety" warnings. Only "pillow" remains in the
worker. | feat: 🎸 upgrade dependencies: it removes most of the "safety" warnings. Only "pillow" remains in the
worker. | closed | 2022-05-31T13:22:36Z | 2022-05-31T13:38:03Z | 2022-05-31T13:38:02Z | severo |
1,253,759,498 | feat: 🎸 adapt the value of resources based on monitoring | See
https://grafana.huggingface.tech/d/a164a7f0339f99e89cea5cb47e9be617/kubernetes-compute-resources-workload?orgId=1&refresh=10s&var-datasource=Prometheus%20EKS%20Hub%20Prod&var-cluster=&var-namespace=datasets-server&var-type=deployment&var-workload=datasets-server-prod-datasets-worker&from=now-24h&to=now
for the metrics about RAM and CPU for the different deployments | feat: 🎸 adapt the value of resources based on monitoring: See
https://grafana.huggingface.tech/d/a164a7f0339f99e89cea5cb47e9be617/kubernetes-compute-resources-workload?orgId=1&refresh=10s&var-datasource=Prometheus%20EKS%20Hub%20Prod&var-cluster=&var-namespace=datasets-server&var-type=deployment&var-workload=datasets-server-prod-datasets-worker&from=now-24h&to=now
for the metrics about RAM and CPU for the different deployments | closed | 2022-05-31T11:57:53Z | 2022-05-31T13:23:25Z | 2022-05-31T12:24:29Z | severo |
1,252,694,347 | Open source this repo | This repo could be open-sourced: it might be useful for people, and the development could benefit from external contributors. | Open source this repo: This repo could be open-sourced: it might be useful for people, and the development could benefit from external contributors. | closed | 2022-05-30T12:48:20Z | 2022-09-23T17:17:35Z | 2022-09-23T17:17:34Z | severo |
1,252,689,399 | Change domain to datasets-server.huggingface.co | - [x] create the domain and point to the load balancer. See https://github.com/huggingface/infra/pull/205 and https://github.com/huggingface/infra/pull/209. Check at https://www.whatsmydns.net/#A/datasets-server.huggingface.co
- [x] use the domain in the ingress: see #328
- [x] change the TLS certificate to accept both domains. The TLS certificates must be created manually at https://us-east-1.console.aws.amazon.com/acm/home?region=us-east-1#/certificates/list.
- [x] created at https://us-east-1.console.aws.amazon.com/acm/home?region=us-east-1#/certificates/bfcad79a-111b-4852-adc2-5d78f4132eb6.
- [x] use the certificate in the ingress: https://github.com/huggingface/datasets-server/pull/331
- [x] change in betteruptime.com (also created the "Datasets server" escalation policy, using the same model as the "Tensorboard" one)
- [x] point directly to the new domain from moon-landing: see https://github.com/huggingface/moon-landing/pull/3119
- [x] to get the config, splits and rows: https://github.com/huggingface/moon-landing/blob/ef4d36bc7e7da25f3c873c324e0bcabffca8e4f9/server/.env.production.example#L40
- [x] to get the assets: https://github.com/huggingface/moon-landing/blob/ef4d36bc7e7da25f3c873c324e0bcabffca8e4f9/server/.env.production.example#L41
- [x] We still get queries to the old domain from the hub [kibana](https://kibana.elastic.huggingface.tech/app/discover#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-15m,to:now))&_a=(columns:!(),filters:!(),index:'24d42510-a44e-11ec-bb45-ad141ad1c5f8',interval:auto,query:(language:kuery,query:'kubernetes.container.name%20:%20%22datasets-server-reverse-proxy%22%20and%20message%20:%20%22datasets-server.huggingface.tech%2Fvalid%22'),sort:!(!('@timestamp',desc)))) - the PR ephemeral environments are still using the old domain -> [nevermind](https://github.com/huggingface/datasets-server/issues/319#issuecomment-1147145154)
- [x] change in autonlp-ui: https://github.com/huggingface/autonlp-ui/blob/ac85c029a58fdd25c8f810cbba8a3d8ecbba6181/src/lib/config.ts#L58. See https://github.com/huggingface/autonlp-ui/pull/270
- [x] change it in the Hub webhook, asked on [Slack](https://huggingface.slack.com/archives/C023JAKTR2P/p1654175138985519)
- [x] remove the domain datasets-server.huggingface.tech from the current project (https://github.com/huggingface/datasets-server) - see https://github.com/huggingface/datasets-server/pull/351
- [x] remove the proxy: https://github.com/huggingface/conf/blob/bd698a91c615938b52477c25d72ba84d10af4c68/moonrise/nginx-moonrise.conf#L321-L328 - see https://github.com/huggingface/conf/pull/173
Should we apply these two tasks, or keep the domain + certificate for further use (admin)? (edit: from @XciD : yes, let's manage everything on the same domain)
- [x] revoke the certificate https://us-east-1.console.aws.amazon.com/acm/home?region=us-east-1#/certificates/321c89f0-1267-4519-b3cf-547738e3340e
- [x] remove the domain datasets-server.huggingface.tech from https://github.com/huggingface/infra -> see https://github.com/huggingface/infra/pull/228
- [x] replace the certificate https://us-east-1.console.aws.amazon.com/acm/home?region=us-east-1#/certificates/bfcad79a-111b-4852-adc2-5d78f4132eb6 with a new one without datasets-server.huggingface.tech in the alternate names
- [x] certificate has been created: https://us-east-1.console.aws.amazon.com/acm/home?region=us-east-1#/certificates/777e3ae5-0c54-47ee-9b8c-d85eeb6ec4ae
- [x] use the new certificate - https://github.com/huggingface/datasets-server/pull/360
- [x] delete the old certificate https://us-east-1.console.aws.amazon.com/acm/home?region=us-east-1#/certificates/bfcad79a-111b-4852-adc2-5d78f4132eb6 | Change domain to datasets-server.huggingface.co: - [x] create the domain and point to the load balancer. See https://github.com/huggingface/infra/pull/205 and https://github.com/huggingface/infra/pull/209. Check at https://www.whatsmydns.net/#A/datasets-server.huggingface.co
- [x] use the domain in the ingress: see #328
- [x] change the TLS certificate to accept both domains. The TLS certificates must be created manually at https://us-east-1.console.aws.amazon.com/acm/home?region=us-east-1#/certificates/list.
- [x] created at https://us-east-1.console.aws.amazon.com/acm/home?region=us-east-1#/certificates/bfcad79a-111b-4852-adc2-5d78f4132eb6.
- [x] use the certificate in the ingress: https://github.com/huggingface/datasets-server/pull/331
- [x] change in betteruptime.com (also created the "Datasets server" escalation policy, using the same model as the "Tensorboard" one)
- [x] point directly to the new domain from moon-landing: see https://github.com/huggingface/moon-landing/pull/3119
- [x] to get the config, splits and rows: https://github.com/huggingface/moon-landing/blob/ef4d36bc7e7da25f3c873c324e0bcabffca8e4f9/server/.env.production.example#L40
- [x] to get the assets: https://github.com/huggingface/moon-landing/blob/ef4d36bc7e7da25f3c873c324e0bcabffca8e4f9/server/.env.production.example#L41
- [x] We still get queries to the old domain from the hub [kibana](https://kibana.elastic.huggingface.tech/app/discover#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-15m,to:now))&_a=(columns:!(),filters:!(),index:'24d42510-a44e-11ec-bb45-ad141ad1c5f8',interval:auto,query:(language:kuery,query:'kubernetes.container.name%20:%20%22datasets-server-reverse-proxy%22%20and%20message%20:%20%22datasets-server.huggingface.tech%2Fvalid%22'),sort:!(!('@timestamp',desc)))) - the PR ephemeral environments are still using the old domain -> [nevermind](https://github.com/huggingface/datasets-server/issues/319#issuecomment-1147145154)
- [x] change in autonlp-ui: https://github.com/huggingface/autonlp-ui/blob/ac85c029a58fdd25c8f810cbba8a3d8ecbba6181/src/lib/config.ts#L58. See https://github.com/huggingface/autonlp-ui/pull/270
- [x] change it in the Hub webhook, asked on [Slack](https://huggingface.slack.com/archives/C023JAKTR2P/p1654175138985519)
- [x] remove the domain datasets-server.huggingface.tech from the current project (https://github.com/huggingface/datasets-server) - see https://github.com/huggingface/datasets-server/pull/351
- [x] remove the proxy: https://github.com/huggingface/conf/blob/bd698a91c615938b52477c25d72ba84d10af4c68/moonrise/nginx-moonrise.conf#L321-L328 - see https://github.com/huggingface/conf/pull/173
Should we apply these two tasks, or keep the domain + certificate for further use (admin)? (edit: from @XciD : yes, let's manage everything on the same domain)
- [x] revoke the certificate https://us-east-1.console.aws.amazon.com/acm/home?region=us-east-1#/certificates/321c89f0-1267-4519-b3cf-547738e3340e
- [x] remove the domain datasets-server.huggingface.tech from https://github.com/huggingface/infra -> see https://github.com/huggingface/infra/pull/228
- [x] replace the certificate https://us-east-1.console.aws.amazon.com/acm/home?region=us-east-1#/certificates/bfcad79a-111b-4852-adc2-5d78f4132eb6 with a new one without datasets-server.huggingface.tech in the alternate names
- [x] certificate has been created: https://us-east-1.console.aws.amazon.com/acm/home?region=us-east-1#/certificates/777e3ae5-0c54-47ee-9b8c-d85eeb6ec4ae
- [x] use the new certificate - https://github.com/huggingface/datasets-server/pull/360
- [x] delete the old certificate https://us-east-1.console.aws.amazon.com/acm/home?region=us-east-1#/certificates/bfcad79a-111b-4852-adc2-5d78f4132eb6 | closed | 2022-05-30T12:43:53Z | 2022-06-09T12:22:52Z | 2022-06-09T12:22:52Z | severo |
1,252,560,230 | Cannot get images from mnist | On https://huggingface.co/datasets/mnist, the images do not appear:
<img width="499" alt="Capture dβeΜcran 2022-05-30 aΜ 12 45 19" src="https://user-images.githubusercontent.com/1676121/170976397-cc283c45-afb5-43fa-9170-31689867deb7.png">
And the requests to the images return 403 or 404:
<img width="929" alt="Capture dβeΜcran 2022-05-30 aΜ 12 45 25" src="https://user-images.githubusercontent.com/1676121/170976411-43f7f348-2178-47e8-bd4f-71a415e25e22.png">
Their URLs are like:
https://huggingface.co/proxy-datasets-preview/assets/mnist/--/mnist/train/91/image/image.jpg
^does not work
Which should proxy to upstream URL:
https://datasets-server.huggingface.tech/assets/mnist/--/mnist/train/91/image/image.jpg
^works
See the nginx configuration: https://github.com/huggingface/conf/blob/bd698a91c615938b52477c25d72ba84d10af4c68/moonrise/nginx-moonrise.conf#L321-L328
Looking at the nginx logs on moonrise (`sudo grep proxy-datasets-preview /var/log/nginx/error.log`) we get a lot of `Connection timed out` errors:
```
2022/05/30 12:41:41 [error] 687523#687523: *867222115 upstream timed out (110: Connection timed out) while connecting to upstream, client: 172.30.1.33, server: huggingface.co, request: "GET /proxy-datasets-preview/assets/mnist/--/mnist/train/91/image/image.jpg HTTP/1.1", upstream: "https://35.175.164.194:443/assets/mnist/--/mnist/train/91/image/image.jpg", host: "huggingface.co"
```
This means that moonrise does not seem able to access the datasets-server.huggingface.co server.
Launching curl from the moonrise server with the domain works:
```
curl https://datasets-server.huggingface.tech/assets/mnist/--/mnist/train/91/image/image.jpg > image.jpg
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 561 100 561 0 0 31166 0 --:--:-- --:--:-- --:--:-- 31166
```
But not with the IP reported in the logs (it times out):
```
hf@moonrise:/tmp$ curl https://35.175.164.194:443/assets/mnist/--/mnist/train/91/image/image.jpg > image.jpg
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:01:10 --:--:-- 0
```
The IP resolved for datasets-server.huggingface.tech:
```
hf@moonrise:/tmp$ dig datasets-server.huggingface.tech
; <<>> DiG 9.16.1-Ubuntu <<>> datasets-server.huggingface.tech
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 48677
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;datasets-server.huggingface.tech. IN A
;; ANSWER SECTION:
datasets-server.huggingface.tech. 13 IN A 34.194.63.218
datasets-server.huggingface.tech. 13 IN A 52.204.14.32
datasets-server.huggingface.tech. 13 IN A 184.72.186.69
datasets-server.huggingface.tech. 13 IN A 50.16.88.70
datasets-server.huggingface.tech. 13 IN A 34.236.116.183
datasets-server.huggingface.tech. 13 IN A 34.239.243.182
;; Query time: 0 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Mon May 30 12:52:46 CEST 2022
;; MSG SIZE rcvd: 157
``` | Cannot get images from mnist: On https://huggingface.co/datasets/mnist, the images do not appear:
<img width="499" alt="Capture dβeΜcran 2022-05-30 aΜ 12 45 19" src="https://user-images.githubusercontent.com/1676121/170976397-cc283c45-afb5-43fa-9170-31689867deb7.png">
And the requests to the images return 403 or 404:
<img width="929" alt="Capture dβeΜcran 2022-05-30 aΜ 12 45 25" src="https://user-images.githubusercontent.com/1676121/170976411-43f7f348-2178-47e8-bd4f-71a415e25e22.png">
Their URLs are like:
https://huggingface.co/proxy-datasets-preview/assets/mnist/--/mnist/train/91/image/image.jpg
^does not work
Which should proxy to upstream URL:
https://datasets-server.huggingface.tech/assets/mnist/--/mnist/train/91/image/image.jpg
^works
See the nginx configuration: https://github.com/huggingface/conf/blob/bd698a91c615938b52477c25d72ba84d10af4c68/moonrise/nginx-moonrise.conf#L321-L328
Looking at the nginx logs on moonrise (`sudo grep proxy-datasets-preview /var/log/nginx/error.log`) we get a lot of `Connection timed out` errors:
```
2022/05/30 12:41:41 [error] 687523#687523: *867222115 upstream timed out (110: Connection timed out) while connecting to upstream, client: 172.30.1.33, server: huggingface.co, request: "GET /proxy-datasets-preview/assets/mnist/--/mnist/train/91/image/image.jpg HTTP/1.1", upstream: "https://35.175.164.194:443/assets/mnist/--/mnist/train/91/image/image.jpg", host: "huggingface.co"
```
This means that moonrise does not seem able to access the datasets-server.huggingface.co server.
Launching curl from the moonrise server with the domain works:
```
curl https://datasets-server.huggingface.tech/assets/mnist/--/mnist/train/91/image/image.jpg > image.jpg
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 561 100 561 0 0 31166 0 --:--:-- --:--:-- --:--:-- 31166
```
But not with the IP reported in the logs (it times out):
```
hf@moonrise:/tmp$ curl https://35.175.164.194:443/assets/mnist/--/mnist/train/91/image/image.jpg > image.jpg
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:01:10 --:--:-- 0
```
The IP resolved for datasets-server.huggingface.tech:
```
hf@moonrise:/tmp$ dig datasets-server.huggingface.tech
; <<>> DiG 9.16.1-Ubuntu <<>> datasets-server.huggingface.tech
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 48677
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;datasets-server.huggingface.tech. IN A
;; ANSWER SECTION:
datasets-server.huggingface.tech. 13 IN A 34.194.63.218
datasets-server.huggingface.tech. 13 IN A 52.204.14.32
datasets-server.huggingface.tech. 13 IN A 184.72.186.69
datasets-server.huggingface.tech. 13 IN A 50.16.88.70
datasets-server.huggingface.tech. 13 IN A 34.236.116.183
datasets-server.huggingface.tech. 13 IN A 34.239.243.182
;; Query time: 0 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Mon May 30 12:52:46 CEST 2022
;; MSG SIZE rcvd: 157
``` | closed | 2022-05-30T10:52:58Z | 2022-05-30T12:53:18Z | 2022-05-30T12:53:18Z | severo |
1,252,533,762 | feat: 🎸 use only one uvicorn worker per api pod | This way: /metrics gives adequate metrics about the starlette app
(requests), since it does not depend on the specific uvicorn worker
responding to the request. See
https://github.com/huggingface/datasets-server/issues/250#issuecomment-1136328511 | feat: 🎸 use only one uvicorn worker per api pod: This way: /metrics gives adequate metrics about the starlette app
(requests), since it does not depend on the specific uvicorn worker
responding to the request. See
https://github.com/huggingface/datasets-server/issues/250#issuecomment-1136328511 | closed | 2022-05-30T10:30:33Z | 2022-05-30T10:30:44Z | 2022-05-30T10:30:43Z | severo |
1,250,952,683 | ci: 🎡 launch e2e after docker build, and use the images | this way we don't build the images twice, and we really test the docker
images that will be used in production | ci: 🎡 launch e2e after docker build, and use the images: this way we don't build the images twice, and we really test the docker
images that will be used in production | closed | 2022-05-27T16:26:50Z | 2022-05-27T16:51:19Z | 2022-05-27T16:50:20Z | severo |
1,250,905,794 | feat: 🎸 build the local libraries | This way, we refer to a specific version, with a related hash, which
helps with the cache entries (in github actions) | feat: 🎸 build the local libraries: This way, we refer to a specific version, with a related hash, which
helps with the cache entries (in github actions) | closed | 2022-05-27T15:40:23Z | 2022-05-27T16:03:27Z | 2022-05-27T16:03:26Z | severo |
1,248,387,169 | ci: 🎡 use cache with poetry | null | ci: 🎡 use cache with poetry: | closed | 2022-05-25T16:43:10Z | 2022-05-27T16:03:54Z | 2022-05-27T16:03:54Z | severo |
1,248,313,104 | ci: 🎡 use cache (gha) when building the docker images | Hopefully it will fit in the 10GB.
See
https://github.com/docker/build-push-action/blob/master/docs/advanced/cache.md#cache-backend-api
and https://github.com/moby/buildkit/tree/master#github-actions-cache-experimental | ci: 🎡 use cache (gha) when building the docker images: Hopefully it will fit in the 10GB.
See
https://github.com/docker/build-push-action/blob/master/docs/advanced/cache.md#cache-backend-api
and https://github.com/moby/buildkit/tree/master#github-actions-cache-experimental | closed | 2022-05-25T15:51:00Z | 2022-05-25T16:41:33Z | 2022-05-25T16:41:32Z | severo |
1,248,107,377 | A lot of images in datasets viewer get 404 | See https://huggingface.co/datasets/julien-c/impressionists
<img width="575" alt="Capture dβeΜcran 2022-05-25 aΜ 15 33 07" src="https://user-images.githubusercontent.com/1676121/170274551-42c6b4d2-df05-4e75-aa93-f6f4eb2fd1cb.png">
<img width="935" alt="Capture dβeΜcran 2022-05-25 aΜ 15 33 34" src="https://user-images.githubusercontent.com/1676121/170274662-b824ece0-cdaf-4dac-893b-054af12af7ea.png">
But the images exist
<img width="1290" alt="Capture dβeΜcran 2022-05-25 aΜ 15 33 45" src="https://user-images.githubusercontent.com/1676121/170274666-11304c41-0feb-4a3c-9229-bfeba41e7c93.png">
| A lot of images in datasets viewer get 404: See https://huggingface.co/datasets/julien-c/impressionists
<img width="575" alt="Capture dβeΜcran 2022-05-25 aΜ 15 33 07" src="https://user-images.githubusercontent.com/1676121/170274551-42c6b4d2-df05-4e75-aa93-f6f4eb2fd1cb.png">
<img width="935" alt="Capture dβeΜcran 2022-05-25 aΜ 15 33 34" src="https://user-images.githubusercontent.com/1676121/170274662-b824ece0-cdaf-4dac-893b-054af12af7ea.png">
But the images exist
<img width="1290" alt="Capture dβeΜcran 2022-05-25 aΜ 15 33 45" src="https://user-images.githubusercontent.com/1676121/170274666-11304c41-0feb-4a3c-9229-bfeba41e7c93.png">
| closed | 2022-05-25T13:34:10Z | 2022-06-02T07:49:51Z | 2022-06-02T07:49:51Z | severo |
1,247,857,708 | Use elastic search to debug issues | https://kibana.elastic.huggingface.tech/app/discover#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:'2022-05-25T09:39:00.000Z',to:'2022-05-25T09:39:30.000Z'))&_a=(columns:!(),filters:!(),index:'24d42510-a44e-11ec-bb45-ad141ad1c5f8',interval:auto,query:(language:kuery,query:'kubernetes.container.name%20:%20%22datasets-server-datasets-worker%22%20'),sort:!(!('@timestamp',desc))) | Use elastic search to debug issues: https://kibana.elastic.huggingface.tech/app/discover#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:'2022-05-25T09:39:00.000Z',to:'2022-05-25T09:39:30.000Z'))&_a=(columns:!(),filters:!(),index:'24d42510-a44e-11ec-bb45-ad141ad1c5f8',interval:auto,query:(language:kuery,query:'kubernetes.container.name%20:%20%22datasets-server-datasets-worker%22%20'),sort:!(!('@timestamp',desc))) | closed | 2022-05-25T10:00:57Z | 2022-09-16T17:40:05Z | 2022-09-16T17:40:05Z | severo |
1,247,857,015 | Create custom grafana dashboards | - per deployment (api / worker / admin / reverse proxy)
- with: the resources (cpu/ram/network) and the specific aspects of the deployment (requests/cache/queue) | Create custom grafana dashboards: - per deployment (api / worker / admin / reverse proxy)
- with: the resources (cpu/ram/network) and the specific aspects of the deployment (requests/cache/queue) | closed | 2022-05-25T10:00:24Z | 2022-09-16T17:40:42Z | 2022-09-16T17:40:42Z | severo |
1,247,851,932 | Scale the worker pods depending on prometheus metrics? | We could scale the number of worker pods depending on:
- the size of the job queue
- the available resources
These data are available in prometheus, and we could use them to autoscale the pods. | Scale the worker pods depending on prometheus metrics?: We could scale the number of worker pods depending on:
- the size of the job queue
- the available resources
These data are available in prometheus, and we could use them to autoscale the pods. | closed | 2022-05-25T09:56:05Z | 2022-09-19T09:30:49Z | 2022-09-19T09:30:49Z | severo |
1,247,850,106 | Add a /metrics endpoint on the admin pod | It will be useful:
1. to separate admin queries from the normal requests
2. to do only one request to the database (to get the cache and queue stats), since there is only 1 admin pod, against one request per pod on the `api` container
(maybe it would be a "datasets-server-prometheus-exporter") | Add a /metrics endpoint on the admin pod: It will be useful:
1. to separate admin queries from the normal requests
2. to do only one request to the database (to get the cache and queue stats), since there is only 1 admin pod, against one request per pod on the `api` container
(maybe it would be a "datasets-server-prometheus-exporter") | closed | 2022-05-25T09:54:32Z | 2022-06-02T13:21:33Z | 2022-06-02T13:21:33Z | severo |
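A minimal sketch of such an endpoint with `prometheus_client` and Starlette (the route wiring is an assumption, not the actual admin service code):

```python
from prometheus_client import CONTENT_TYPE_LATEST, generate_latest
from starlette.applications import Starlette
from starlette.responses import Response
from starlette.routing import Route

async def metrics(request):
    # Render the default registry in the Prometheus text exposition format.
    return Response(generate_latest(), media_type=CONTENT_TYPE_LATEST)

app = Starlette(routes=[Route("/metrics", metrics)])
```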
1,247,847,689 | Add a /metrics endpoint on every worker? | null | Add a /metrics endpoint on every worker?: | closed | 2022-05-25T09:52:28Z | 2022-09-16T17:40:55Z | 2022-09-16T17:40:55Z | severo |
1,247,847,116 | Add a /metrics endpoint to the reverse proxy | See https://github.com/nginxinc/nginx-prometheus-exporter | Add a /metrics endpoint to the reverse proxy: See https://github.com/nginxinc/nginx-prometheus-exporter | closed | 2022-05-25T09:51:59Z | 2022-09-16T17:41:01Z | 2022-09-16T17:41:01Z | severo |
1,247,003,057 | feat: 🎸 block two datasets | null | feat: 🎸 block two datasets: | closed | 2022-05-24T18:57:51Z | 2022-05-24T18:57:57Z | 2022-05-24T18:57:57Z | severo |
1,246,998,660 | perf: ⚡️ increase the number of replicas for the API | and add 1 uvicorn worker to follow recommendations | perf: ⚡️ increase the number of replicas for the API: and add 1 uvicorn worker to follow recommendations | closed | 2022-05-24T18:53:30Z | 2022-05-24T18:53:35Z | 2022-05-24T18:53:35Z | severo |
1,246,994,411 | feat: 🎸 update the docker images | null | feat: 🎸 update the docker images: | closed | 2022-05-24T18:48:56Z | 2022-05-24T18:49:03Z | 2022-05-24T18:49:03Z | severo |
1,246,986,270 | Review the nginx configuration <> uvicorn | See http://www.uvicorn.org/deployment/#running-behind-nginx.
The current nginx configuration is https://github.com/huggingface/datasets-server/blob/main/infra/charts/datasets-server/nginx-templates/default.conf.template
| Review the nginx configuration <> uvicorn: See http://www.uvicorn.org/deployment/#running-behind-nginx.
The current nginx configuration is https://github.com/huggingface/datasets-server/blob/main/infra/charts/datasets-server/nginx-templates/default.conf.template
| closed | 2022-05-24T18:41:16Z | 2022-09-16T17:41:05Z | 2022-09-16T17:41:05Z | severo |
1,246,565,499 | Request on /splits takes too long on red_caps | reported by @mariosasko (thanks)
https://huggingface.co/datasets/red_caps showed "server-side error", which generally means a timeout from the node code that fetches the data
Indeed, while not in the reverse-proxy cache, the response to http://datasets-server.huggingface.tech/splits?dataset=red_caps takes too long (a lot more than 1.5s for sure, maybe 20s; I didn't measure the time). Note that this dataset has... 1741 splits (1741 configs, each one with the `train` split)! Still, it should not take that long to generate.
We have to optimize the time taken by this request, and add tests to ensure we are always able to serve the response in a short time.
Related to #4 | Request on /splits takes too long on red_caps: reported by @mariosasko (thanks)
https://huggingface.co/datasets/red_caps showed "server-side error", which generally means a timeout from the node code that fetches the data
Indeed, while not in the reverse-proxy cache, the response to http://datasets-server.huggingface.tech/splits?dataset=red_caps takes too long (a lot more than 1.5s for sure, maybe 20s; I didn't measure the time). Note that this dataset has... 1741 splits (1741 configs, each one with the `train` split)! Still, it should not take that long to generate.
We have to optimize the time taken by this request, and add tests to ensure we are always able to serve the response in a short time.
Related to #4 | closed | 2022-05-24T13:33:42Z | 2022-05-31T16:11:52Z | 2022-05-31T16:11:52Z | severo |
1,246,242,016 | fix: 🐛 disable cache and queue metrics for now | null | fix: 🐛 disable cache and queue metrics for now: | closed | 2022-05-24T09:11:38Z | 2022-05-24T09:11:44Z | 2022-05-24T09:11:43Z | severo |
1,245,392,540 | feat: 🎸 update docker images | They add metrics about the cache, the queue, and the starlette requests
to the /metrics endpoint | feat: 🎸 update docker images: They add metrics about the cache, the queue, and the starlette requests
to the /metrics endpoint | closed | 2022-05-23T15:57:18Z | 2022-05-23T16:06:32Z | 2022-05-23T16:06:31Z | severo |
1,245,385,647 | Reenable metrics | null | Reenable metrics: | closed | 2022-05-23T15:52:22Z | 2022-05-23T15:52:28Z | 2022-05-23T15:52:27Z | severo |
1,245,337,775 | Notes on technologies to store, query and/or process the datasets | Just a list, for now:
- https://spark.apache.org/
- https://dask.org/
- https://www.elastic.co/
- https://www.mongodb.com/
- https://duckdb.org/
- https://www.ray.io/
- https://arrow.apache.org/docs/python/parquet.html | Notes on technologies to store, query and/or process the datasets: Just a list, for now:
- https://spark.apache.org/
- https://dask.org/
- https://www.elastic.co/
- https://www.mongodb.com/
- https://duckdb.org/
- https://www.ray.io/
- https://arrow.apache.org/docs/python/parquet.html | closed | 2022-05-23T15:20:22Z | 2022-09-19T09:00:21Z | 2022-09-19T09:00:20Z | severo |
1,245,018,356 | feat: 🎸 update docker images | Adds mongo indexes to the collections | feat: 🎸 update docker images: Adds mongo indexes to the collections | closed | 2022-05-23T11:27:12Z | 2022-05-23T11:27:21Z | 2022-05-23T11:27:20Z | severo |
1,245,016,202 | feat: 🎸 add indexes in mongo | null | feat: 🎸 add indexes in mongo: | closed | 2022-05-23T11:25:23Z | 2022-05-23T11:25:43Z | 2022-05-23T11:25:42Z | severo |
1,244,885,362 | Update docker images | null | Update docker images: | closed | 2022-05-23T09:42:28Z | 2022-05-23T09:42:35Z | 2022-05-23T09:42:34Z | severo |
1,244,876,973 | Monitor the cost of running the datasets-server | null | Monitor the cost of running the datasets-server: | closed | 2022-05-23T09:35:56Z | 2022-09-16T17:41:21Z | 2022-09-16T17:41:21Z | severo |
1,244,868,145 | Fix valid endpoint query | null | Fix valid endpoint query: | closed | 2022-05-23T09:29:22Z | 2022-05-23T09:29:43Z | 2022-05-23T09:29:42Z | severo |
1,244,789,564 | feat: 🎸 update docker images | also: add a dataset to the block list | feat: 🎸 update docker images: also: add a dataset to the block list | closed | 2022-05-23T08:40:11Z | 2022-05-23T08:40:18Z | 2022-05-23T08:40:17Z | severo |
1,244,769,219 | feat: 🎸 upgrade datasets to 2.2.2 (and minor upgrades) | null | feat: 🎸 upgrade datasets to 2.2.2 (and minor upgrades): | closed | 2022-05-23T08:23:11Z | 2022-05-23T08:23:16Z | 2022-05-23T08:23:16Z | severo |
1,244,755,426 | fix: 🐛 increase resources for api, and block big datasets | also: format | fix: 🐛 increase resources for api, and block big datasets: also: format | closed | 2022-05-23T08:15:09Z | 2022-05-23T08:15:17Z | 2022-05-23T08:15:16Z | severo |
1,242,883,051 | debug the memory+cpu usage of python applications | ### CPU
For local tests:
- https://github.com/benfred/py-spy (to generate flame graphs)
- https://crates.io/crates/oha (to send traffic)
For tests on the prod infra:
- https://medium.com/swlh/introducing-kubectl-flame-effortless-profiling-on-kubernetes-4b80fc181852: to generate flame graphs
related to #2 | debug the memory+cpu usage of python applications: ### CPU
For local tests:
- https://github.com/benfred/py-spy (to generate flame graphs)
- https://crates.io/crates/oha (to send traffic)
For tests on the prod infra:
- https://medium.com/swlh/introducing-kubectl-flame-effortless-profiling-on-kubernetes-4b80fc181852: to generate flame graphs
related to #2 | closed | 2022-05-20T09:32:35Z | 2022-09-16T17:41:31Z | 2022-09-16T17:41:30Z | severo |
1,242,829,699 | perf: ⚡️ reduce the number of workers | also: use only one API pod (seems like the other one is never called...
to investigate).
Note: the healthcheck was timing out (the reverse proxy replied, but the
api was not responding quickly enough). See
https://betteruptime.com/team/14149/incidents/229868663. | perf: ⚡️ reduce the number of workers: also: use only one API pod (seems like the other one is never called...
to investigate).
Note: the healthcheck was timing out (the reverse proxy replied, but the
api was not responding quickly enough). See
https://betteruptime.com/team/14149/incidents/229868663. | closed | 2022-05-20T08:43:58Z | 2022-05-20T08:44:09Z | 2022-05-20T08:44:08Z | severo |
1,242,719,391 | "The dataset does not exist" error | From [Slack](https://huggingface.slack.com/archives/CUTRZ7YJ0/p1652995016789019)
> Hello, I’m facing an issue with the dataset preview feature. I uploaded a dataset here: https://huggingface.co/datasets/allenai/wmt22_african, and added a loading script. I have also confirmed that loading the dataset works with `streaming=True`. However, the dataset preview’s error message says that “The dataset does not exist.”. (edited)
>
> It looks like the dataset viewer is now visible. Since I didn’t change anything, I’m assuming it was fixed on your end. Thanks!
<img width="1128" alt="Screenshot 2022-05-20 at 09 04 05" src="https://user-images.githubusercontent.com/1676121/169472265-4b818b7b-2d09-4065-9a54-088ae0d8e7f2.png">
We should improve the different states, and not show this kind of error message if it's obvious that the dataset exists. Error messages should be reserved for errors with the dataset script, not for normal states (cache not filled yet) | "The dataset does not exist" error: From [Slack](https://huggingface.slack.com/archives/CUTRZ7YJ0/p1652995016789019)
> Hello, I’m facing an issue with the dataset preview feature. I uploaded a dataset here: https://huggingface.co/datasets/allenai/wmt22_african, and added a loading script. I have also confirmed that loading the dataset works with `streaming=True`. However, the dataset preview’s error message says that “The dataset does not exist.”. (edited)
>
> It looks like the dataset viewer is now visible. Since I didn’t change anything, I’m assuming it was fixed on your end. Thanks!
<img width="1128" alt="Screenshot 2022-05-20 at 09 04 05" src="https://user-images.githubusercontent.com/1676121/169472265-4b818b7b-2d09-4065-9a54-088ae0d8e7f2.png">
We should improve the different states, and not show this kind of error message if it's obvious that the dataset exists. Error messages should be reserved for errors with the dataset script, not for normal states (cache not filled yet) | closed | 2022-05-20T07:05:26Z | 2022-06-08T08:43:30Z | 2022-06-08T08:43:30Z | severo |
1,240,209,913 | feat: 🎸 update prod values | block more datasets, increase the memory limits for the API, add a
memory limit to the workers | feat: 🎸 update prod values: block more datasets, increase the memory limits for the API, add a
memory limit to the workers | closed | 2022-05-18T16:28:35Z | 2022-05-18T16:28:41Z | 2022-05-18T16:28:41Z | severo |
1,238,725,987 | test: 💍 fix test | null | test: 💍 fix test: | closed | 2022-05-17T14:25:20Z | 2022-05-17T14:25:26Z | 2022-05-17T14:25:25Z | severo
1,238,702,892 | feat: 🎸 upgrade images | null | feat: 🎸 upgrade images: | closed | 2022-05-17T14:09:31Z | 2022-05-17T14:09:38Z | 2022-05-17T14:09:37Z | severo
1,238,701,260 | fix: 🐛 disable the metrics about cache and queue | because it makes several queries to the database on every call to
/metrics (which is every second) | fix: 🐛 disable the metrics about cache and queue: because it makes several queries to the database on every call to
/metrics (which is every second) | closed | 2022-05-17T14:08:18Z | 2022-05-17T14:08:25Z | 2022-05-17T14:08:24Z | severo |
1,238,667,511 | feat: 🎸 upgrade images | null | feat: 🎸 upgrade images: | closed | 2022-05-17T13:44:06Z | 2022-05-17T13:56:07Z | 2022-05-17T13:56:06Z | severo
1,238,665,543 | Fix RAM in prod | null | Fix RAM in prod: | closed | 2022-05-17T13:42:44Z | 2022-05-17T13:42:50Z | 2022-05-17T13:42:49Z | severo
1,238,650,908 | The API is unavailable in production | ```
k logs datasets-server-prod-api-6c9f9d5cc-6h52g -f
```
```
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/src/services/api/.venv/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", line 369, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/src/services/api/.venv/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 59, in __call__
return await self.app(scope, receive, send)
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/applications.py", line 112, in __call__
await self.middleware_stack(scope, receive, send)
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/middleware/errors.py", line 181, in __call__
raise exc
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/middleware/errors.py", line 159, in __call__
await self.app(scope, receive, _send)
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 23, in __call__
await responder(scope, receive, send)
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 42, in __call__
await self.app(scope, receive, self.send_with_gzip)
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/middleware/base.py", line 57, in __call__
task_group.cancel_scope.cancel()
File "/src/services/api/.venv/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 662, in __aexit__
raise exceptions[0]
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/middleware/base.py", line 30, in coro
await self.app(scope, request.receive, send_stream.send)
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/exceptions.py", line 82, in __call__
raise exc
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/exceptions.py", line 71, in __call__
await self.app(scope, receive, sender)
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/routing.py", line 656, in __call__
await route.handle(scope, receive, send)
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/routing.py", line 259, in handle
await self.app(scope, receive, send)
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/routing.py", line 64, in app
await response(scope, receive, send)
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/responses.py", line 139, in __call__
await send({"type": "http.response.body", "body": self.body})
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/exceptions.py", line 68, in sender
await send(message)
File "/src/services/api/.venv/lib/python3.9/site-packages/anyio/streams/memory.py", line 221, in send
raise BrokenResourceError
anyio.BrokenResourceError
```
It seems like a known error: https://github.com/tiangolo/fastapi/issues/4041.
Possibly due to a middleware (maybe `PrometheusMiddleware` here: https://github.com/huggingface/datasets-server/blob/main/services/api/src/api/app.py#L48) | The API is unavailable in production: ```
k logs datasets-server-prod-api-6c9f9d5cc-6h52g -f
```
```
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/src/services/api/.venv/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", line 369, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/src/services/api/.venv/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 59, in __call__
return await self.app(scope, receive, send)
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/applications.py", line 112, in __call__
await self.middleware_stack(scope, receive, send)
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/middleware/errors.py", line 181, in __call__
raise exc
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/middleware/errors.py", line 159, in __call__
await self.app(scope, receive, _send)
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 23, in __call__
await responder(scope, receive, send)
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 42, in __call__
await self.app(scope, receive, self.send_with_gzip)
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/middleware/base.py", line 57, in __call__
task_group.cancel_scope.cancel()
File "/src/services/api/.venv/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 662, in __aexit__
raise exceptions[0]
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/middleware/base.py", line 30, in coro
await self.app(scope, request.receive, send_stream.send)
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/exceptions.py", line 82, in __call__
raise exc
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/exceptions.py", line 71, in __call__
await self.app(scope, receive, sender)
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/routing.py", line 656, in __call__
await route.handle(scope, receive, send)
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/routing.py", line 259, in handle
await self.app(scope, receive, send)
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/routing.py", line 64, in app
await response(scope, receive, send)
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/responses.py", line 139, in __call__
await send({"type": "http.response.body", "body": self.body})
File "/src/services/api/.venv/lib/python3.9/site-packages/starlette/exceptions.py", line 68, in sender
await send(message)
File "/src/services/api/.venv/lib/python3.9/site-packages/anyio/streams/memory.py", line 221, in send
raise BrokenResourceError
anyio.BrokenResourceError
```
It seems like a known error: https://github.com/tiangolo/fastapi/issues/4041.
Possibly due to a middleware (maybe `PrometheusMiddleware` here: https://github.com/huggingface/datasets-server/blob/main/services/api/src/api/app.py#L48) | closed | 2022-05-17T13:32:24Z | 2022-05-18T21:12:36Z | 2022-05-17T13:42:50Z | severo |
1,238,588,897 | fix: 🐛 the block list must be a comma-separated list | We also reduce the list to the only three datasets that
don't seem to be possible to manage for now. | fix: 🐛 the block list must be a comma-separated list: We also reduce the list to the only three datasets that
don't seem to be possible to manage for now. | closed | 2022-05-17T12:46:31Z | 2022-05-17T12:46:38Z | 2022-05-17T12:46:37Z | severo
1,238,561,489 | Upgrade webhooks to version 2 | See https://github.com/huggingface/moon-landing/blob/main/server/lib/HFWebhooks.ts. We are currently receiving webhooks in format v1 | Upgrade webhooks to version 2: See https://github.com/huggingface/moon-landing/blob/main/server/lib/HFWebhooks.ts. We are currently receiving webhooks in format v1 | closed | 2022-05-17T12:23:52Z | 2022-09-19T08:59:21Z | 2022-09-19T08:59:21Z | severo |
1,238,347,340 | feat: 🎸 enable monitoring in prod | null | feat: 🎸 enable monitoring in prod: | closed | 2022-05-17T09:26:11Z | 2022-05-17T09:34:05Z | 2022-05-17T09:34:04Z | severo
1,238,274,765 | feat: 🎸 add the admin service (to run admin scripts) | null | feat: 🎸 add the admin service (to run admin scripts): | closed | 2022-05-17T08:31:08Z | 2022-05-17T09:09:28Z | 2022-05-17T09:09:28Z | severo
1,238,221,930 | fix: 🐛 fix nfs mount | In production, the nodes must be selected with `role-datasets-server:
'true'` to have access to the NFS. Fixes #270 | fix: 🐛 fix nfs mount: In production, the nodes must be selected with `role-datasets-server:
'true'` to have access to the NFS. Fixes #270 | closed | 2022-05-17T07:50:20Z | 2022-05-17T07:50:26Z | 2022-05-17T07:50:25Z | severo |
1,237,176,195 | Run /metrics on another port | See https://github.com/huggingface/datasets-server/pull/260#discussion_r873383737
See also how it's done in Go for the tensorboard launcher: https://github.com/huggingface/tensorboard-launcher/blob/46df45821e8311095e824bc39b9acdadaf99634c/launcher/pkg/cmd/launcher/main.go#L61 | Run /metrics on another port: See https://github.com/huggingface/datasets-server/pull/260#discussion_r873383737
See also how it's done in Go for the tensorboard launcher: https://github.com/huggingface/tensorboard-launcher/blob/46df45821e8311095e824bc39b9acdadaf99634c/launcher/pkg/cmd/launcher/main.go#L61 | closed | 2022-05-16T13:36:55Z | 2022-09-16T17:42:35Z | 2022-09-16T17:42:35Z | severo |
1,237,107,489 | Upgrade worker | null | Upgrade worker: | closed | 2022-05-16T12:47:43Z | 2022-05-16T12:47:49Z | 2022-05-16T12:47:48Z | severo |
1,237,030,045 | fix: 🐛 fix the query to get the list of jobs in the queue | we did a lot of unnecessary lookups. | fix: 🐛 fix the query to get the list of jobs in the queue: we did a lot of unnecessary lookups. | closed | 2022-05-16T11:43:26Z | 2022-05-16T12:26:46Z | 2022-05-16T11:43:35Z | severo
1,235,593,237 | Sometimes the NFS volume doesn't mount | ```
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m44s default-scheduler Successfully assigned datasets-server/datasets-server-prod-api-6658dfb778-t5kh5 to ip-10-0-30-134.ec2.internal
Warning FailedMount 102s kubelet MountVolume.SetUp failed for volume "nfs" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs svm-0adb40782285e2ec6.fs-0220b222fb471f3b9.fsx.us-east-1.amazonaws.com:/fsx /var/lib/kubelet/pods/5088f644-f128-4b8c-b7e3-a9058904ac8e/volumes/kubernetes.io~nfs/nfs
Output: mount.nfs: Connection timed out
Warning FailedMount 27s (x2 over 2m41s) kubelet Unable to attach or mount volumes: unmounted volumes=[nfs], unattached volumes=[nfs kube-api-access-rhbw9]: timed out waiting for the condition
``` | Sometimes the NFS volume doesn't mount: ```
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m44s default-scheduler Successfully assigned datasets-server/datasets-server-prod-api-6658dfb778-t5kh5 to ip-10-0-30-134.ec2.internal
Warning FailedMount 102s kubelet MountVolume.SetUp failed for volume "nfs" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs svm-0adb40782285e2ec6.fs-0220b222fb471f3b9.fsx.us-east-1.amazonaws.com:/fsx /var/lib/kubelet/pods/5088f644-f128-4b8c-b7e3-a9058904ac8e/volumes/kubernetes.io~nfs/nfs
Output: mount.nfs: Connection timed out
Warning FailedMount 27s (x2 over 2m41s) kubelet Unable to attach or mount volumes: unmounted volumes=[nfs], unattached volumes=[nfs kube-api-access-rhbw9]: timed out waiting for the condition
``` | closed | 2022-05-13T19:13:01Z | 2022-05-17T07:50:25Z | 2022-05-17T07:50:25Z | severo |
1,235,588,394 | feat: 🎸 upgrade image | null | feat: 🎸 upgrade image: | closed | 2022-05-13T19:06:42Z | 2022-05-13T19:06:49Z | 2022-05-13T19:06:48Z | severo
1,235,576,551 | fix: 🐛 fix loop | null | fix: 🐛 fix loop: | closed | 2022-05-13T18:52:20Z | 2022-05-13T18:52:28Z | 2022-05-13T18:52:27Z | severo
1,235,572,992 | feat: 🎸 upgrade images | null | feat: 🎸 upgrade images: | closed | 2022-05-13T18:47:34Z | 2022-05-13T18:47:40Z | 2022-05-13T18:47:39Z | severo