id (int64, 959M–2.55B) | title (string, 3–133 chars) | body (string, 1–65.5k chars, nullable) | description (string, 5–65.6k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | closed_at (string, 20 chars, nullable) | user (string, 174 classes)
---|---|---|---|---|---|---|---|---|
1,938,185,944 | clean duckdb index files | Currently, we share the duckdb EFS volume for:
- downloads (247 G)
- job runner index processing

For cleaning the downloads folder, the "delete-indexes" job used to do the work, but it looks like the volume keeps filling up with leftover data when a job runner crashes or hangs as a long-running (zombie) process, so that https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/_job_runner_with_cache.py#L66 is never called.
This PR will apply the same logic as the existing one for hf_datasets_cache, but:
- will move the subfolder where the job runner indexes are created to /storage/duckdb-index/job_runner
- will set an expiration time of 3 hours in prod for the /storage/duckdb-index/job_runner folder
- will set an expiration time of 3 days in prod for the /storage/duckdb-index/downloads folder
- will reuse the hf_datasets_cache cleaning logic for both folders (downloads and job_runner)
| clean duckdb index files: Currently, we share the duckdb EFS volume for:
- downloads (247 G)
- job runner index processing

For cleaning the downloads folder, the "delete-indexes" job used to do the work, but it looks like the volume keeps filling up with leftover data when a job runner crashes or hangs as a long-running (zombie) process, so that https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/_job_runner_with_cache.py#L66 is never called.
This PR will apply the same logic as the existing one for hf_datasets_cache, but:
- will move the subfolder where the job runner indexes are created to /storage/duckdb-index/job_runner
- will set an expiration time of 3 hours in prod for the /storage/duckdb-index/job_runner folder
- will set an expiration time of 3 days in prod for the /storage/duckdb-index/downloads folder
- will reuse the hf_datasets_cache cleaning logic for both folders (downloads and job_runner)
| closed | 2023-10-11T16:11:32Z | 2023-10-12T15:36:24Z | 2023-10-12T15:36:23Z | AndreaFrancis |
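For illustration, a minimal sketch in Python of the expiration-based cleanup this PR describes. Only the paths and expiration times come from the description; the function and constant names are assumptions, not the actual datasets-server code.

```python
import shutil
import time
from pathlib import Path

# Expiration times mirror the PR description; everything else is a hypothetical sketch.
EXPIRATIONS_IN_SECONDS = {
    Path("/storage/duckdb-index/job_runner"): 3 * 60 * 60,      # 3 hours in prod
    Path("/storage/duckdb-index/downloads"): 3 * 24 * 60 * 60,  # 3 days in prod
}

def clean_expired_subfolders() -> None:
    """Delete subfolders that have not been modified within their expiration time."""
    now = time.time()
    for root, expire_after in EXPIRATIONS_IN_SECONDS.items():
        if not root.is_dir():
            continue
        for subfolder in root.iterdir():
            if subfolder.is_dir() and now - subfolder.stat().st_mtime > expire_after:
                shutil.rmtree(subfolder, ignore_errors=True)
```

Keying the expiry on the folder's mtime means a crashed (zombie) job runner's leftovers get removed even though its own cleanup hook never ran.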
1,937,973,073 | Add bonus difficulty to `split-descriptive-statistics` for big datasets | Hopefully this will fix `JobManagerExceededMaximumDurationError` for `split-descriptive-statistics` on big datasets like https://huggingface.co/datasets/Open-Orca/OpenOrca. It appears that loading the data into an in-memory table takes too much time.
We can try this approach for now; meanwhile, I'm exploring other options for querying big data faster. | Add bonus difficulty to `split-descriptive-statistics` for big datasets: Hopefully this will fix `JobManagerExceededMaximumDurationError` for `split-descriptive-statistics` on big datasets like https://huggingface.co/datasets/Open-Orca/OpenOrca. It appears that loading the data into an in-memory table takes too much time.
We can try this approach for now; meanwhile, I'm exploring other options for querying big data faster. | closed | 2023-10-11T14:43:52Z | 2023-10-11T14:59:53Z | 2023-10-11T14:59:51Z | polinaeterna |
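To make the idea concrete, here is a sketch of what a size-dependent difficulty bump could look like; all names and values below are assumptions for illustration, not the actual datasets-server configuration.

```python
# Hypothetical values: the real step difficulty and threshold are configuration-specific.
DEFAULT_DIFFICULTY = 70
BONUS_DIFFICULTY_IF_DATASET_IS_BIG = 20
BIG_DATASET_THRESHOLD_BYTES = 3_000_000_000

def compute_difficulty(dataset_size_in_bytes: int) -> int:
    """Raise the job difficulty for big datasets so they are routed to heavier workers."""
    difficulty = DEFAULT_DIFFICULTY
    if dataset_size_in_bytes > BIG_DATASET_THRESHOLD_BYTES:
        difficulty += BONUS_DIFFICULTY_IF_DATASET_IS_BIG
    return difficulty
```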
1,937,958,008 | Fix clean hf datasets cache pattern | It wasn't deleting anything because the pattern was wrong. | Fix clean hf datasets cache pattern: It wasn't deleting anything because the pattern was wrong. | closed | 2023-10-11T14:38:11Z | 2023-10-11T15:02:08Z | 2023-10-11T15:02:06Z | lhoestq |
1,937,957,982 | Factorize getting required request parameters | Factorize getting the required request parameters `dataset`, `config`, `split`, `query` and `where` (from the admin service and the endpoints /rows, /search and /filter) to `libapi`.
Additionally, align their error messages. | Factorize getting required request parameters: Factorize getting the required request parameters `dataset`, `config`, `split`, `query` and `where` (from the admin service and the endpoints /rows, /search and /filter) to `libapi`.
Additionally, align their error messages. | closed | 2023-10-11T14:38:11Z | 2023-10-12T05:58:43Z | 2023-10-12T05:58:14Z | albertvillanova |
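A sketch of what such a factorized helper could look like with Starlette, which these services are built on; the helper and error names are assumptions rather than the actual `libapi` API.

```python
from starlette.requests import Request

class MissingRequiredParameterError(Exception):
    """Hypothetical error type; the real libapi error class may differ."""

def get_required_request_parameter(request: Request, parameter_name: str) -> str:
    """Read a required query parameter, failing with a consistent error message."""
    parameter = request.query_params.get(parameter_name, "")
    if not parameter:
        raise MissingRequiredParameterError(f"Parameter '{parameter_name}' is required")
    return parameter
```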
1,937,562,368 | fix: 🐛 ensure the order of discussion comments | see https://github.com/huggingface/moon-landing/issues/7729#issuecomment-1757276965 (internal) | fix: 🐛 ensure the order of discussion comments: see https://github.com/huggingface/moon-landing/issues/7729#issuecomment-1757276965 (internal) | closed | 2023-10-11T11:39:45Z | 2023-10-11T11:42:40Z | 2023-10-11T11:42:39Z | severo |
1,937,202,667 | Update gitpython to 3.1.37 to fix vulnerability | Update `gitpython` to 3.1.37 to fix vulnerability.
This should fix 10 dependabot alerts.
Supersede and close #1958.
Supersede and close #1959. | Update gitpython to 3.1.37 to fix vulnerability: Update `gitpython` to 3.1.37 to fix vulnerability.
This should fix 10 dependabot alerts.
Supersede and close #1958.
Supersede and close #1959. | closed | 2023-10-11T08:43:08Z | 2023-10-11T10:47:10Z | 2023-10-11T10:47:08Z | albertvillanova |
1,937,116,043 | Install dependency `music_tag`? | Requested here: https://huggingface.co/datasets/zeio/baneks-speech/discussions/1 | Install dependency `music_tag`?: Requested here: https://huggingface.co/datasets/zeio/baneks-speech/discussions/1 | closed | 2023-10-11T08:07:53Z | 2024-02-02T17:18:50Z | 2024-02-02T17:18:50Z | severo |
1,937,109,025 | upgrade nginx? | https://www.nginx.com/blog/http-2-rapid-reset-attack-impacting-f5-nginx-products/ | upgrade nginx?: https://www.nginx.com/blog/http-2-rapid-reset-attack-impacting-f5-nginx-products/ | closed | 2023-10-11T08:03:33Z | 2023-11-04T09:30:39Z | 2023-11-04T09:30:39Z | severo |
1,937,105,973 | docs: ✏️ add two details to the Parquet docs | 1. refs/convert/parquet lives in parallel to `main` and is not meant to be merged, 2. Parquet-native datasets are not converted | docs: ✏️ add two details to the Parquet docs: 1. refs/convert/parquet lives in parallel to `main` and is not meant to be merged, 2. Parquet-native datasets are not converted | closed | 2023-10-11T08:01:38Z | 2023-10-11T11:35:31Z | 2023-10-11T11:34:57Z | severo |
1,936,244,399 | build(deps-dev): bump gitpython from 3.1.36 to 3.1.37 in /libs/libcommon | Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.36 to 3.1.37.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/gitpython-developers/GitPython/releases">gitpython's releases</a>.</em></p>
<blockquote>
<h2>3.1.37 - a proper fix CVE-2023-41040</h2>
<h2>What's Changed</h2>
<ul>
<li>Improve Python version and OS compatibility, fixing deprecations by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1654">gitpython-developers/GitPython#1654</a></li>
<li>Better document env_case test/fixture and cwd by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1657">gitpython-developers/GitPython#1657</a></li>
<li>Remove spurious executable permissions by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1658">gitpython-developers/GitPython#1658</a></li>
<li>Fix up checks in Makefile and make them portable by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1661">gitpython-developers/GitPython#1661</a></li>
<li>Fix URLs that were redirecting to another license by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1662">gitpython-developers/GitPython#1662</a></li>
<li>Assorted small fixes/improvements to root dir docs by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1663">gitpython-developers/GitPython#1663</a></li>
<li>Use venv instead of virtualenv in test_installation by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1664">gitpython-developers/GitPython#1664</a></li>
<li>Omit py_modules in setup by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1665">gitpython-developers/GitPython#1665</a></li>
<li>Don't track code coverage temporary files by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1666">gitpython-developers/GitPython#1666</a></li>
<li>Configure tox by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1667">gitpython-developers/GitPython#1667</a></li>
<li>Format tests with black and auto-exclude untracked paths by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1668">gitpython-developers/GitPython#1668</a></li>
<li>Upgrade and broaden flake8, fixing style problems and bugs by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1673">gitpython-developers/GitPython#1673</a></li>
<li>Fix rollback bug in SymbolicReference.set_reference by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1675">gitpython-developers/GitPython#1675</a></li>
<li>Remove <code>@NoEffect</code> annotations by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1677">gitpython-developers/GitPython#1677</a></li>
<li>Add more checks for the validity of refnames by <a href="https://github.com/facutuesca"><code>@facutuesca</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1672">gitpython-developers/GitPython#1672</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.36...3.1.37">https://github.com/gitpython-developers/GitPython/compare/3.1.36...3.1.37</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/b27a89f683cda85ebd78243c055e876282df89ee"><code>b27a89f</code></a> fix makefile to compare commit hashes only</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/0bd2890ef42a7506b81a96c3c94b064917ed0d7b"><code>0bd2890</code></a> prepare next release</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/832b6eeb4a14e669099c486862c9f568215d5afb"><code>832b6ee</code></a> remove unnecessary list comprehension to fix CI</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/e98f57b81f792f0f5e18d33ee658ae395f9aa3c4"><code>e98f57b</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1672">#1672</a> from trail-of-forks/robust-refname-checks</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/1774f1e32307deb755f80dc51b220566c7aef755"><code>1774f1e</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1677">#1677</a> from EliahKagan/no-noeffect</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/a4701a0f17308ec8d4b5871e6e2a95c4e2ca5b91"><code>a4701a0</code></a> Remove <code>@NoEffect</code> annotations</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/d40320b823994ed908d8a5e236758ff525851cd4"><code>d40320b</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1675">#1675</a> from EliahKagan/rollback</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/d1c1f31dbd4a4fd527f9f3ff2ea901abf023c46b"><code>d1c1f31</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1673">#1673</a> from EliahKagan/flake8</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/e480985aa4d358d0cc167d4552910e85944b8966"><code>e480985</code></a> Tweak rollback logic in log.to_file</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/ff84b26445b147ee9e2c75d82903b0c6b09e2b7a"><code>ff84b26</code></a> Refactor try-finally cleanup in git/</li>
<li>Additional commits viewable in <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.36...3.1.37">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/datasets-server/network/alerts).
</details> | build(deps-dev): bump gitpython from 3.1.36 to 3.1.37 in /libs/libcommon: Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.36 to 3.1.37.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/gitpython-developers/GitPython/releases">gitpython's releases</a>.</em></p>
<blockquote>
<h2>3.1.37 - a proper fix CVE-2023-41040</h2>
<h2>What's Changed</h2>
<ul>
<li>Improve Python version and OS compatibility, fixing deprecations by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1654">gitpython-developers/GitPython#1654</a></li>
<li>Better document env_case test/fixture and cwd by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1657">gitpython-developers/GitPython#1657</a></li>
<li>Remove spurious executable permissions by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1658">gitpython-developers/GitPython#1658</a></li>
<li>Fix up checks in Makefile and make them portable by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1661">gitpython-developers/GitPython#1661</a></li>
<li>Fix URLs that were redirecting to another license by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1662">gitpython-developers/GitPython#1662</a></li>
<li>Assorted small fixes/improvements to root dir docs by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1663">gitpython-developers/GitPython#1663</a></li>
<li>Use venv instead of virtualenv in test_installation by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1664">gitpython-developers/GitPython#1664</a></li>
<li>Omit py_modules in setup by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1665">gitpython-developers/GitPython#1665</a></li>
<li>Don't track code coverage temporary files by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1666">gitpython-developers/GitPython#1666</a></li>
<li>Configure tox by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1667">gitpython-developers/GitPython#1667</a></li>
<li>Format tests with black and auto-exclude untracked paths by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1668">gitpython-developers/GitPython#1668</a></li>
<li>Upgrade and broaden flake8, fixing style problems and bugs by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1673">gitpython-developers/GitPython#1673</a></li>
<li>Fix rollback bug in SymbolicReference.set_reference by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1675">gitpython-developers/GitPython#1675</a></li>
<li>Remove <code>@NoEffect</code> annotations by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1677">gitpython-developers/GitPython#1677</a></li>
<li>Add more checks for the validity of refnames by <a href="https://github.com/facutuesca"><code>@facutuesca</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1672">gitpython-developers/GitPython#1672</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.36...3.1.37">https://github.com/gitpython-developers/GitPython/compare/3.1.36...3.1.37</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/b27a89f683cda85ebd78243c055e876282df89ee"><code>b27a89f</code></a> fix makefile to compare commit hashes only</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/0bd2890ef42a7506b81a96c3c94b064917ed0d7b"><code>0bd2890</code></a> prepare next release</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/832b6eeb4a14e669099c486862c9f568215d5afb"><code>832b6ee</code></a> remove unnecessary list comprehension to fix CI</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/e98f57b81f792f0f5e18d33ee658ae395f9aa3c4"><code>e98f57b</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1672">#1672</a> from trail-of-forks/robust-refname-checks</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/1774f1e32307deb755f80dc51b220566c7aef755"><code>1774f1e</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1677">#1677</a> from EliahKagan/no-noeffect</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/a4701a0f17308ec8d4b5871e6e2a95c4e2ca5b91"><code>a4701a0</code></a> Remove <code>@NoEffect</code> annotations</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/d40320b823994ed908d8a5e236758ff525851cd4"><code>d40320b</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1675">#1675</a> from EliahKagan/rollback</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/d1c1f31dbd4a4fd527f9f3ff2ea901abf023c46b"><code>d1c1f31</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1673">#1673</a> from EliahKagan/flake8</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/e480985aa4d358d0cc167d4552910e85944b8966"><code>e480985</code></a> Tweak rollback logic in log.to_file</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/ff84b26445b147ee9e2c75d82903b0c6b09e2b7a"><code>ff84b26</code></a> Refactor try-finally cleanup in git/</li>
<li>Additional commits viewable in <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.36...3.1.37">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/datasets-server/network/alerts).
</details> | closed | 2023-10-10T21:01:31Z | 2023-10-11T10:47:21Z | 2023-10-11T10:47:10Z | dependabot[bot] |
1,936,243,588 | build(deps-dev): bump gitpython from 3.1.36 to 3.1.37 in /e2e | Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.36 to 3.1.37.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/gitpython-developers/GitPython/releases">gitpython's releases</a>.</em></p>
<blockquote>
<h2>3.1.37 - a proper fix CVE-2023-41040</h2>
<h2>What's Changed</h2>
<ul>
<li>Improve Python version and OS compatibility, fixing deprecations by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1654">gitpython-developers/GitPython#1654</a></li>
<li>Better document env_case test/fixture and cwd by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1657">gitpython-developers/GitPython#1657</a></li>
<li>Remove spurious executable permissions by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1658">gitpython-developers/GitPython#1658</a></li>
<li>Fix up checks in Makefile and make them portable by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1661">gitpython-developers/GitPython#1661</a></li>
<li>Fix URLs that were redirecting to another license by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1662">gitpython-developers/GitPython#1662</a></li>
<li>Assorted small fixes/improvements to root dir docs by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1663">gitpython-developers/GitPython#1663</a></li>
<li>Use venv instead of virtualenv in test_installation by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1664">gitpython-developers/GitPython#1664</a></li>
<li>Omit py_modules in setup by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1665">gitpython-developers/GitPython#1665</a></li>
<li>Don't track code coverage temporary files by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1666">gitpython-developers/GitPython#1666</a></li>
<li>Configure tox by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1667">gitpython-developers/GitPython#1667</a></li>
<li>Format tests with black and auto-exclude untracked paths by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1668">gitpython-developers/GitPython#1668</a></li>
<li>Upgrade and broaden flake8, fixing style problems and bugs by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1673">gitpython-developers/GitPython#1673</a></li>
<li>Fix rollback bug in SymbolicReference.set_reference by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1675">gitpython-developers/GitPython#1675</a></li>
<li>Remove <code>@NoEffect</code> annotations by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1677">gitpython-developers/GitPython#1677</a></li>
<li>Add more checks for the validity of refnames by <a href="https://github.com/facutuesca"><code>@facutuesca</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1672">gitpython-developers/GitPython#1672</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.36...3.1.37">https://github.com/gitpython-developers/GitPython/compare/3.1.36...3.1.37</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/b27a89f683cda85ebd78243c055e876282df89ee"><code>b27a89f</code></a> fix makefile to compare commit hashes only</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/0bd2890ef42a7506b81a96c3c94b064917ed0d7b"><code>0bd2890</code></a> prepare next release</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/832b6eeb4a14e669099c486862c9f568215d5afb"><code>832b6ee</code></a> remove unnecessary list comprehension to fix CI</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/e98f57b81f792f0f5e18d33ee658ae395f9aa3c4"><code>e98f57b</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1672">#1672</a> from trail-of-forks/robust-refname-checks</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/1774f1e32307deb755f80dc51b220566c7aef755"><code>1774f1e</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1677">#1677</a> from EliahKagan/no-noeffect</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/a4701a0f17308ec8d4b5871e6e2a95c4e2ca5b91"><code>a4701a0</code></a> Remove <code>@NoEffect</code> annotations</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/d40320b823994ed908d8a5e236758ff525851cd4"><code>d40320b</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1675">#1675</a> from EliahKagan/rollback</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/d1c1f31dbd4a4fd527f9f3ff2ea901abf023c46b"><code>d1c1f31</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1673">#1673</a> from EliahKagan/flake8</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/e480985aa4d358d0cc167d4552910e85944b8966"><code>e480985</code></a> Tweak rollback logic in log.to_file</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/ff84b26445b147ee9e2c75d82903b0c6b09e2b7a"><code>ff84b26</code></a> Refactor try-finally cleanup in git/</li>
<li>Additional commits viewable in <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.36...3.1.37">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/datasets-server/network/alerts).
</details> | build(deps-dev): bump gitpython from 3.1.36 to 3.1.37 in /e2e: Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.36 to 3.1.37.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/gitpython-developers/GitPython/releases">gitpython's releases</a>.</em></p>
<blockquote>
<h2>3.1.37 - a proper fix CVE-2023-41040</h2>
<h2>What's Changed</h2>
<ul>
<li>Improve Python version and OS compatibility, fixing deprecations by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1654">gitpython-developers/GitPython#1654</a></li>
<li>Better document env_case test/fixture and cwd by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1657">gitpython-developers/GitPython#1657</a></li>
<li>Remove spurious executable permissions by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1658">gitpython-developers/GitPython#1658</a></li>
<li>Fix up checks in Makefile and make them portable by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1661">gitpython-developers/GitPython#1661</a></li>
<li>Fix URLs that were redirecting to another license by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1662">gitpython-developers/GitPython#1662</a></li>
<li>Assorted small fixes/improvements to root dir docs by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1663">gitpython-developers/GitPython#1663</a></li>
<li>Use venv instead of virtualenv in test_installation by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1664">gitpython-developers/GitPython#1664</a></li>
<li>Omit py_modules in setup by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1665">gitpython-developers/GitPython#1665</a></li>
<li>Don't track code coverage temporary files by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1666">gitpython-developers/GitPython#1666</a></li>
<li>Configure tox by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1667">gitpython-developers/GitPython#1667</a></li>
<li>Format tests with black and auto-exclude untracked paths by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1668">gitpython-developers/GitPython#1668</a></li>
<li>Upgrade and broaden flake8, fixing style problems and bugs by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1673">gitpython-developers/GitPython#1673</a></li>
<li>Fix rollback bug in SymbolicReference.set_reference by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1675">gitpython-developers/GitPython#1675</a></li>
<li>Remove <code>@NoEffect</code> annotations by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1677">gitpython-developers/GitPython#1677</a></li>
<li>Add more checks for the validity of refnames by <a href="https://github.com/facutuesca"><code>@facutuesca</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1672">gitpython-developers/GitPython#1672</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.36...3.1.37">https://github.com/gitpython-developers/GitPython/compare/3.1.36...3.1.37</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/b27a89f683cda85ebd78243c055e876282df89ee"><code>b27a89f</code></a> fix makefile to compare commit hashes only</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/0bd2890ef42a7506b81a96c3c94b064917ed0d7b"><code>0bd2890</code></a> prepare next release</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/832b6eeb4a14e669099c486862c9f568215d5afb"><code>832b6ee</code></a> remove unnecessary list comprehension to fix CI</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/e98f57b81f792f0f5e18d33ee658ae395f9aa3c4"><code>e98f57b</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1672">#1672</a> from trail-of-forks/robust-refname-checks</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/1774f1e32307deb755f80dc51b220566c7aef755"><code>1774f1e</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1677">#1677</a> from EliahKagan/no-noeffect</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/a4701a0f17308ec8d4b5871e6e2a95c4e2ca5b91"><code>a4701a0</code></a> Remove <code>@NoEffect</code> annotations</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/d40320b823994ed908d8a5e236758ff525851cd4"><code>d40320b</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1675">#1675</a> from EliahKagan/rollback</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/d1c1f31dbd4a4fd527f9f3ff2ea901abf023c46b"><code>d1c1f31</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1673">#1673</a> from EliahKagan/flake8</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/e480985aa4d358d0cc167d4552910e85944b8966"><code>e480985</code></a> Tweak rollback logic in log.to_file</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/ff84b26445b147ee9e2c75d82903b0c6b09e2b7a"><code>ff84b26</code></a> Refactor try-finally cleanup in git/</li>
<li>Additional commits viewable in <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.36...3.1.37">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/datasets-server/network/alerts).
</details> | closed | 2023-10-10T21:01:09Z | 2023-10-11T10:47:20Z | 2023-10-11T10:47:10Z | dependabot[bot] |
1,935,261,283 | autoconverted parquet file has too big cells | See https://huggingface.co/datasets/imvladikon/hebrew_speech_coursera/discussions/1#6523d448b623a04e6c2f118a
>
> From the logs I see this error
>
> TooBigRows: Rows from parquet row groups are too big to be read: 313.33 MiB (max=286.10 MiB)
>
> It looks like an issue on our side: the row groups in the parquet files at https://huggingface.co/datasets/imvladikon/hebrew_speech_coursera/tree/refs%2Fconvert%2Fparquet/default/train are too big to be read by the api. We'll investigate this, thanks for reporting
> | autoconverted parquet file has too big cells: See https://huggingface.co/datasets/imvladikon/hebrew_speech_coursera/discussions/1#6523d448b623a04e6c2f118a
>
> From the logs I see this error
>
> TooBigRows: Rows from parquet row groups are too big to be read: 313.33 MiB (max=286.10 MiB)
>
> It looks like an issue on our side: the row groups in the parquet files at https://huggingface.co/datasets/imvladikon/hebrew_speech_coursera/tree/refs%2Fconvert%2Fparquet/default/train are too big to be read by the api. We'll investigate this, thanks for reporting
> | open | 2023-10-10T12:39:17Z | 2024-08-02T08:54:20Z | null | severo |
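One possible mitigation on the conversion side, sketched with pyarrow: rewrite the converted file with smaller row groups so each group stays under the reader's memory limit. The file names and the row-group size below are placeholders, not the settings the team ultimately chose.

```python
import pyarrow.parquet as pq

# Read the oversized file and rewrite it with smaller row groups.
table = pq.read_table("input.parquet")
pq.write_table(table, "output.parquet", row_group_size=100)
```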
1,935,246,150 | upgrade hfh to 0.18.0? | https://github.com/huggingface/huggingface_hub/releases/tag/v0.18.0 | upgrade hfh to 0.18.0?: https://github.com/huggingface/huggingface_hub/releases/tag/v0.18.0 | closed | 2023-10-10T12:33:04Z | 2023-11-16T11:47:04Z | 2023-11-16T11:47:04Z | severo |
1,934,747,968 | Update fastapi to 0.103.2 to fix vulnerability | This should fix 1 dependabot alert. | Update fastapi to 0.103.2 to fix vulnerability: This should fix 1 dependabot alert. | closed | 2023-10-10T08:24:57Z | 2023-10-10T13:15:46Z | 2023-10-10T13:15:45Z | albertvillanova |
1,934,676,935 | Fix UnexpectedError for non-integer length/offset | Fix `UnexpectedError` for non-integer length/offset and raise `InvalidParameterError` instead. | Fix UnexpectedError for non-integer length/offset: Fix `UnexpectedError` for non-integer length/offset and raise `InvalidParameterError` instead. | closed | 2023-10-10T08:01:46Z | 2023-10-10T13:18:30Z | 2023-10-10T13:17:54Z | albertvillanova |
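A minimal sketch of the described behavior change; the error class name comes from the PR title, while the helper name is an assumption.

```python
class InvalidParameterError(Exception):
    """Client error for invalid request parameters."""

def get_int_parameter(raw_value: str, parameter_name: str) -> int:
    """Convert a query parameter to int, raising a client error instead of an unexpected one."""
    try:
        return int(raw_value)
    except ValueError:
        raise InvalidParameterError(f"Parameter '{parameter_name}' must be an integer") from None
```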
1,933,249,977 | switch hardcoded `MAX_NUM_STRING_LABELS` from 30 to some relative number (n_unique / n_samples) | null | switch hardcoded `MAX_NUM_STRING_LABELS` from 30 to some relative number (n_unique / n_samples): | closed | 2023-10-09T14:44:49Z | 2024-06-19T14:28:15Z | 2024-06-19T14:28:15Z | polinaeterna |
1,933,148,365 | filter parameter should accept any character? | https://datasets-server.huggingface.co/filter?dataset=polinaeterna/delays_nans&config=default&split=train&where=string_col=йопта&offset=0&limit=100
gives an error
```
{"error":"Parameter 'where' is invalid"}
``` | filter parameter should accept any character?: https://datasets-server.huggingface.co/filter?dataset=polinaeterna/delays_nans&config=default&split=train&where=string_col=йопта&offset=0&limit=100
gives an error
```
{"error":"Parameter 'where' is invalid"}
``` | closed | 2023-10-09T13:59:20Z | 2023-10-09T17:26:15Z | 2023-10-09T17:26:15Z | severo |
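A sketch of the likely validation issue: an ASCII-only allow-list for the `where` parameter rejects non-Latin text. Both patterns below are assumptions for illustration, not the actual datasets-server code.

```python
import re

STRICT_WHERE = re.compile(r"^[A-Za-z0-9_=<>!'\" ()-]*$")  # ASCII-only allow-list
RELAXED_WHERE = re.compile(r"^[^;]*$")  # accepts any character, still rejects ';'

where = "string_col=йопта"
print(bool(STRICT_WHERE.match(where)))   # False: non-Latin characters are rejected
print(bool(RELAXED_WHERE.match(where)))  # True: any character is accepted
```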
1,933,039,540 | Fix cronjob resources | Certain jobs were not working because of
```
WARNING: 2023-10-09 12:00:35,208 - root - The connection to the cache database could not be established. The action is skipped.
```
even though they don't require a connection to the cache or queue databases.
Affected jobs:
- clean-hf-datasets-cache
- delete-indexes | Fix cronjob resources: Certain jobs were not working because of
```
WARNING: 2023-10-09 12:00:35,208 - root - The connection to the cache database could not be established. The action is skipped.
```
even though they don't require a connection to the cache or queue databases.
Affected jobs:
- clean-hf-datasets-cache
- delete-indexes | closed | 2023-10-09T13:12:14Z | 2023-10-09T13:17:44Z | 2023-10-09T13:17:43Z | lhoestq |
1,932,535,875 | Unblock viewer of alexandrainst/nota dataset | Unblock alexandrainst/nota dataset.
See discussion on the Hub: https://huggingface.co/datasets/alexandrainst/nota/discussions/1 | Unblock viewer of alexandrainst/nota dataset: Unblock alexandrainst/nota dataset.
See discussion on the Hub: https://huggingface.co/datasets/alexandrainst/nota/discussions/1 | closed | 2023-10-09T08:17:53Z | 2023-10-10T08:36:58Z | 2023-10-10T08:36:57Z | albertvillanova |
1,932,504,480 | Make CI quality verify consistency of poetry.lock | Add a check to CI quality, so that we verify that the `poetry.lock` file is consistent with `pyproject.toml`.
@severo this verification also checks if the dependencies of a service are consistent, for example if the dependencies of `libcommon` or `libapi` have been previously updated. See related discussion in: https://github.com/huggingface/datasets-server/pull/1943#issuecomment-1750964793
Note that this approach does not prevent updating the dependencies of the `libs` without also updating the dependencies in the dependent services, but it at least guarantees that the dependencies of the services are updated as soon as possible afterwards.
The CI should be green once other PRs (that update the dependencies of the services after the updates of the libs) are merged:
- [x] #1942
- [x] #1943
See "services/rows / quality / code-quality": https://github.com/huggingface/datasets-server/actions/runs/6453845913/job/17518191315
```
> Run poetry lock --no-update --check
Error: poetry.lock is not consistent with pyproject.toml. Run `poetry lock [--no-update]` to fix it.
Error: Process completed with exit code 1.
``` | Make CI quality verify consistency of poetry.lock: Add a check to CI quality, so that we verify that the `poetry.lock` file is consistent with `pyproject.toml`.
@severo this verification also checks if the dependencies of a service are consistent, for example if the dependencies of `libcommon` or `libapi` have been previously updated. See related discussion in: https://github.com/huggingface/datasets-server/pull/1943#issuecomment-1750964793
Note that this approach does not prevent updating the dependencies of the `libs` without also updating the dependencies in the dependent services, but it at least guarantees that the dependencies of the services are updated as soon as possible afterwards.
The CI should be green once other PRs (that update the dependencies of the services after the updates of the libs) are merged:
- [x] #1942
- [x] #1943
See "services/rows / quality / code-quality": https://github.com/huggingface/datasets-server/actions/runs/6453845913/job/17518191315
```
> Run poetry lock --no-update --check
Error: poetry.lock is not consistent with pyproject.toml. Run `poetry lock [--no-update]` to fix it.
Error: Process completed with exit code 1.
``` | closed | 2023-10-09T07:57:06Z | 2023-10-10T14:07:42Z | 2023-10-10T14:07:41Z | albertvillanova |
1,930,771,724 | add ExternalServerError in error codes to retry for backfilling | Currently, we have 856 cached records with the error ExternalServerError. This error is thrown when the connection to the Spawning AI API is not working:
`Error when trying to connect to https://opts-api.spawningaiapi.com/api/v2/query/urls: '{'detail': 'An unknown error has occurred.'}'`
We should retry those jobs because the error is not related to our logic but to the connection.
```
Atlas atlas-x5jgb3-shard-0 [primary] datasets_server_cache> db.cachedResponsesBlue.countDocuments({error_code:"ExternalServerError"})
856
Atlas atlas-x5jgb3-shard-0 [primary] datasets_server_cache> db.cachedResponsesBlue.distinct("kind", {error_code:"ExternalServerError"})
[ 'split-opt-in-out-urls-count', 'split-opt-in-out-urls-scan' ]
```
-------------------------
Also, removing NoIndexableColumnsError from the list, since the exception no longer exists in the code and all the old entries have been refreshed.
```
Atlas atlas-x5jgb3-shard-0 [primary] datasets_server_cache> db.cachedResponsesBlue.countDocuments({error_code:"NoIndexableColumnsError"})
0
``` | add ExternalServerError in error codes to retry for backfilling: Currently, we have 856 cached records with the error ExternalServerError. This error is thrown when the connection to the Spawning AI API is not working:
`Error when trying to connect to https://opts-api.spawningaiapi.com/api/v2/query/urls: '{'detail': 'An unknown error has occurred.'}'`
We should retry those jobs because the error is not related to our logic but to the connection.
```
Atlas atlas-x5jgb3-shard-0 [primary] datasets_server_cache> db.cachedResponsesBlue.countDocuments({error_code:"ExternalServerError"})
856
Atlas atlas-x5jgb3-shard-0 [primary] datasets_server_cache> db.cachedResponsesBlue.distinct("kind", {error_code:"ExternalServerError"})
[ 'split-opt-in-out-urls-count', 'split-opt-in-out-urls-scan' ]
```
-------------------------
Also, removing NoIndexableColumnsError from the list, since the exception no longer exists in the code and all the old entries have been refreshed.
```
Atlas atlas-x5jgb3-shard-0 [primary] datasets_server_cache> db.cachedResponsesBlue.countDocuments({error_code:"NoIndexableColumnsError"})
0
``` | closed | 2023-10-06T18:44:22Z | 2023-10-09T12:15:21Z | 2023-10-09T12:15:20Z | AndreaFrancis |
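In code, the change described above amounts to editing the retryable error codes list; the constant name and the placeholder entries below are assumptions.

```python
ERROR_CODES_TO_RETRY = [
    # ... other retryable error codes ...
    "ExternalServerError",  # added: transient Spawning AI API connection failures
    # "NoIndexableColumnsError",  # removed: the exception no longer exists in the code
]
```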
1,930,552,155 | Fix clean datasets cache action name | 🙈 | Fix clean datasets cache action name: 🙈 | closed | 2023-10-06T16:21:01Z | 2023-10-06T16:57:29Z | 2023-10-06T16:57:28Z | lhoestq |
1,930,342,534 | Remove duplicate error types in admin service | This PR removes the duplicate error types defined in the `admin` service and uses the ones defined in `libapi`.
This PR needs the following PR to be merged first:
- [x] #1945
Otherwise, see "services/admin / quality / code-quality": https://github.com/huggingface/datasets-server/actions/runs/6433158237/job/17469627449
```
Because admin depends on libapi (0.1.0) @ file:///home/runner/work/datasets-server/datasets-server/libs/libapi which depends on starlette (^0.28.0), starlette is required.
So, because admin depends on starlette (^0.27.0), version solving failed.
``` | Remove duplicate error types in admin service: This PR removes the duplicate error types defined in the `admin` service and uses the ones defined in `libapi`.
This PR needs the following PR to be merged first:
- [x] #1945
Otherwise, see "services/admin / quality / code-quality": https://github.com/huggingface/datasets-server/actions/runs/6433158237/job/17469627449
```
Because admin depends on libapi (0.1.0) @ file:///home/runner/work/datasets-server/datasets-server/libs/libapi which depends on starlette (^0.28.0), starlette is required.
So, because admin depends on starlette (^0.27.0), version solving failed.
``` | closed | 2023-10-06T14:50:20Z | 2023-10-10T15:06:23Z | 2023-10-10T15:06:22Z | albertvillanova |
1,930,326,851 | Align starlette to 0.28.0 in all subpackages | This PR aligns the `starlette` version to 0.28.0 in all subpackages.
Note that before this PR:
- in the `admin` and `worker` services, `starlette` was pinned to 0.27.0
- whereas in the `libapi`, `starlette` was pinned to 0.28.0
Thus, the `admin` and `worker` services could not use the `libapi` (subsequent PR). | Align starlette to 0.28.0 in all subpackages: This PR aligns the `starlette` version to 0.28.0 in all subpackages.
Note that before this PR:
- in the `admin` and `worker` services, `starlette` was pinned to 0.27.0
- whereas in the `libapi`, `starlette` was pinned to 0.28.0
Thus, the `admin` and `worker` services could not use the `libapi` (subsequent PR). | closed | 2023-10-06T14:44:11Z | 2023-10-09T12:48:37Z | 2023-10-09T12:48:36Z | albertvillanova |
1,930,192,066 | fix staging bucket config | null | fix staging bucket config: | closed | 2023-10-06T13:30:29Z | 2023-10-06T13:31:38Z | 2023-10-06T13:31:37Z | AndreaFrancis |
1,930,078,937 | Update libapi duckdb dependency in all services | Now that `libapi` depends on `duckdb`, this PR updates `libapi` in all services. | Update libapi duckdb dependency in all services: Now that `libapi` depends on `duckdb`, this PR updates `libapi` in all services. | closed | 2023-10-06T12:31:48Z | 2023-10-09T10:26:25Z | 2023-10-09T10:26:24Z | albertvillanova |
1,929,996,844 | Remove boto3 as dev dependency | After the merge of #1418, `boto3` is no longer necessary as a dev dependency in ~~rows and~~ search services.
Note that the dependency on `boto3` is indirectly set by the dependency on `libcommon`. | Remove boto3 as dev dependency: After the merge of #1418, `boto3` is no longer necessary as a dev dependency in ~~rows and~~ search services.
Note that the dependency on `boto3` is indirectly set by the dependency on `libcommon`. | closed | 2023-10-06T11:40:55Z | 2023-10-06T12:25:36Z | 2023-10-06T12:25:35Z | albertvillanova |
1,929,769,306 | fix(chart): remove nodeSelector in staging & prepare prod | Tests in staging are finished. Let's prepare the prod deployment (not yet planned) | fix(chart): remove nodeSelector in staging & prepare prod: Tests in staging are finished. Let's prepare the prod deployment (not yet planned) | closed | 2023-10-06T09:24:52Z | 2023-10-06T11:28:34Z | 2023-10-06T11:28:33Z | rtrompier |
1,929,750,787 | Replace UnexpectedError with UnexpectedApiError in filter | Replace `UnexpectedError` with `UnexpectedApiError` in the filter endpoint, so that it is aligned with the other endpoints. This misalignment was introduced after the simultaneous merge of:
- #1475
- #1418 | Replace UnexpectedError with UnexpectedApiError in filter: Replace `UnexpectedError` with `UnexpectedApiError` in the filter endpoint, so that it is aligned with the other endpoints. This misalignment was introduced after the simultaneous merge of:
- #1475
- #1418 | closed | 2023-10-06T09:13:38Z | 2023-10-06T09:30:17Z | 2023-10-06T09:30:16Z | albertvillanova |
1,928,961,496 | force download when previous file is obsolete | Related to https://github.com/huggingface/datasets-server/issues/1443 for cause_exception=`"OSError"`
For https://datasets-server.huggingface.co/search?dataset=xnli&config=all_languages&split=train&offset=0&length=1&query=language
```
OSError: Consistency check failed: file should be of size 301736502 but has size 293192246 (0002.parquet).
We are sorry for the inconvenience. Please retry download and pass `force_download=True, resume_download=False` as argument.
If the issue persists, please let us know by opening an issue on https://github.com/huggingface/huggingface_hub.
```
Note that the error occurs in hf_hub_download: it looks like the library tried to compare the file with a previous one in the local cache. Maybe the data changed, or there was a previously downloaded obsolete/corrupted file.
I think we should force-download each time to always have a refreshed parquet file.
| force download when previous file is obsolete: Related to https://github.com/huggingface/datasets-server/issues/1443 for cause_exception=`"OSError"`
For https://datasets-server.huggingface.co/search?dataset=xnli&config=all_languages&split=train&offset=0&length=1&query=language
```
OSError: Consistency check failed: file should be of size 301736502 but has size 293192246 (0002.parquet).
We are sorry for the inconvenience. Please retry download and pass `force_download=True, resume_download=False` as argument.
If the issue persists, please let us know by opening an issue on https://github.com/huggingface/huggingface_hub.
```
Note that the error occurs in hf_hub_download: it looks like the library tried to compare the file with a previous one in the local cache. Maybe the data changed, or there was a previously downloaded obsolete/corrupted file.
I think we should force-download each time to always have a refreshed parquet file.
| closed | 2023-10-05T19:54:20Z | 2023-10-09T12:36:27Z | 2023-10-09T12:33:32Z | AndreaFrancis |
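A sketch of the proposed call using the real `huggingface_hub` API; the repo id and revision are placeholders (the file name is taken from the error message above).

```python
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="some-user/some-dataset",  # placeholder
    filename="0002.parquet",
    repo_type="dataset",
    revision="refs/convert/parquet",   # placeholder
    force_download=True,    # always refresh, never trust the cached copy
    resume_download=False,  # don't resume from a possibly corrupted partial file
)
```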
1,928,731,713 | fix: adding support for other data types in orjson_dumps | Part of https://github.com/huggingface/datasets-server/issues/1443
Currently, we have 105 cache records with error types like:
```
'Type is not JSON serializable: Timedelta',
'Type is not JSON serializable: Timestamp',
'Type is not JSON serializable: datetime.timedelta',
'Type is not JSON serializable: numpy.int64',
'Type is not JSON serializable: numpy.ndarray'
```
The problem is at https://github.com/huggingface/datasets-server/blob/main/libs/libcommon/src/libcommon/utils.py#L120 trying to serialize the values.
After this PR, I will force refresh the affected splits (most of them are for `split-first-rows-from-streaming`)
Also there is an error for "split-descriptive-statistics":
`"Dict key must be str"
`
This was due to non-string keys in ClassLabels like ints. After this PR, I will force refresh the affected splits (49 records)
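For reference, a fallback along these lines covers the listed types (a minimal sketch, not necessarily the exact implementation in `libcommon.utils`):
```python
import datetime

import numpy as np
import orjson
import pandas as pd

def orjson_default(obj: object) -> object:
    # convert types orjson cannot serialize natively into basic JSON types
    if isinstance(obj, (pd.Timestamp, pd.Timedelta, datetime.timedelta)):
        return str(obj)
    if isinstance(obj, np.generic):  # numpy scalars like numpy.int64
        return obj.item()
    if isinstance(obj, np.ndarray):
        return obj.tolist()
    raise TypeError

# OPT_NON_STR_KEYS serializes non-string dict keys (e.g. int ClassLabel ids) as strings
orjson.dumps({0: "entailment"}, default=orjson_default, option=orjson.OPT_NON_STR_KEYS)
```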
| fix: adding support for other data types in orjson_dumps: Part of https://github.com/huggingface/datasets-server/issues/1443
Currently, we have 105 cache records with error types like:
```
'Type is not JSON serializable: Timedelta',
'Type is not JSON serializable: Timestamp',
'Type is not JSON serializable: datetime.timedelta',
'Type is not JSON serializable: numpy.int64',
'Type is not JSON serializable: numpy.ndarray'
```
The problem is at https://github.com/huggingface/datasets-server/blob/main/libs/libcommon/src/libcommon/utils.py#L120 trying to serialize the values.
After this PR, I will force refresh the affected splits (most of them are for `split-first-rows-from-streaming`)
Also there is an error for "split-descriptive-statistics":
`"Dict key must be str"
`
This was due to non-string keys in ClassLabels like ints. After this PR, I will force refresh the affected splits (49 records)
| closed | 2023-10-05T17:13:44Z | 2023-10-09T17:32:58Z | 2023-10-06T18:34:40Z | AndreaFrancis |
1,928,613,825 | Do not pass None headers in e2e tests | This PR refactors e2e tests so that we do not pass unnecessary None headers (this is their default value). | Do not pass None headers in e2e tests: This PR refactors e2e tests so that we do not pass unnecessary None headers (this is their default value). | closed | 2023-10-05T15:56:45Z | 2023-10-06T09:28:30Z | 2023-10-06T09:28:29Z | albertvillanova |
1,928,530,105 | Update statistics documentation | TODO:
- [x] readme for worker
- [x] endpoint docs
- [x] update images of the viewer | Update statistics documentation: TODO:
- [x] readme for worker
- [x] endpoint docs
- [x] update images of the viewer | closed | 2023-10-05T15:11:46Z | 2023-10-10T09:57:46Z | 2023-10-10T09:53:55Z | polinaeterna |
1,928,320,804 | Fix missing index column in /filter | I found something missing in one line of the final code (maybe lost during one of the multiple merges of the main branch and subsequent conflict resolutions).
CC: @severo @lhoestq | Fix missing index column in /filter: I found something missing in one line of the final code (maybe lost during one of the multiple merges of the main branch and subsequent conflict resolutions).
CC: @severo @lhoestq | closed | 2023-10-05T13:38:52Z | 2023-10-05T14:57:33Z | 2023-10-05T14:57:32Z | albertvillanova |
1,928,090,733 | Factorize getting the request parameters length and offset | Factorize getting the request parameters `length` and `offset` from the service routes (/rows, /search and /filter) to `libapi`.
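A factorized helper could look like this sketch (the exception name and its location in `libapi` are assumptions):
```python
from starlette.requests import Request

from libapi.exceptions import InvalidParameterError  # assumed shared error type

def get_request_parameter_offset(request: Request) -> int:
    try:
        offset = int(request.query_params.get("offset", 0))
    except ValueError:
        raise InvalidParameterError("Parameter 'offset' must be a positive integer")
    if offset < 0:
        raise InvalidParameterError("Parameter 'offset' must be a positive integer")
    return offset
```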
Additionally, align their error messages. | Factorize getting the request parameters length and offset: Factorize getting the request parameters `length` and `offset` from the service routes (/rows, /search and /filter) to `libapi`.
Additionally, align their error messages. | closed | 2023-10-05T11:46:06Z | 2023-10-09T12:43:50Z | 2023-10-09T12:43:08Z | albertvillanova |
1,928,004,729 | feat: 🎸 rename the discussion title to be clearer | reference: https://huggingface.slack.com/archives/C04L6P8KNQ5/p1696502639784009 (internal)
| feat: 🎸 rename the discussion title to be clearer: reference: https://huggingface.slack.com/archives/C04L6P8KNQ5/p1696502639784009 (internal)
| closed | 2023-10-05T10:58:59Z | 2023-10-05T11:20:56Z | 2023-10-05T11:20:55Z | severo |
1,927,813,625 | Fix e2e test_statistics_endpoint | Fix e2e `test_statistics_endpoint` after the simultaneous merge of:
- #1870
- #1418 | Fix e2e test_statistics_endpoint: Fix e2e `test_statistics_endpoint` after the simultaneous merge of:
- #1870
- #1418 | closed | 2023-10-05T09:30:03Z | 2023-10-05T11:17:36Z | 2023-10-05T11:17:35Z | albertvillanova |
1,927,743,351 | A dataset with "disabled" viewer has cache entries | For example, the dataset https://huggingface.co/datasets/irds/neuclir_1_zh has disabled the viewer:
> The Dataset Viewer has been disabled on this dataset.
But we have entries in the database:
https://datasets-server.huggingface.co/splits?dataset=irds/neuclir_1_zh
```json
{
"error": "The dataset tries to import a module that is not installed.",
"cause_exception": "ImportError",
"cause_message": "ir-datasets package missing; `pip install ir-datasets`",
"cause_traceback": [
"Traceback (most recent call last):\n",
" File \"/tmp/modules-cache/datasets_modules/datasets/irds--neuclir_1_zh/bb0b0253c1583dc19fc7decb4442fa2f8df4972ca05d8eccd8c5fdc2252851a2/neuclir_1_zh.py\", line 5, in <module>\n import ir_datasets\n",
"ModuleNotFoundError: No module named 'ir_datasets'\n",
"\nDuring handling of the above exception, another exception occurred:\n\n",
"Traceback (most recent call last):\n",
" File \"/src/services/worker/src/worker/job_runners/dataset/config_names.py\", line 55, in compute_config_names_response\n for config in sorted(get_dataset_config_names(path=dataset, token=hf_token))\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 360, in get_dataset_config_names\n builder_cls = get_dataset_builder_class(dataset_module, dataset_name=os.path.basename(path))\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 197, in get_dataset_builder_class\n builder_cls = import_main_class(dataset_module.module_path)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 115, in import_main_class\n module = importlib.import_module(module_path)\n",
" File \"/usr/local/lib/python3.9/importlib/__init__.py\", line 127, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n",
" File \"<frozen importlib._bootstrap>\", line 1030, in _gcd_import\n",
" File \"<frozen importlib._bootstrap>\", line 1007, in _find_and_load\n",
" File \"<frozen importlib._bootstrap>\", line 986, in _find_and_load_unlocked\n",
" File \"<frozen importlib._bootstrap>\", line 680, in _load_unlocked\n",
" File \"<frozen importlib._bootstrap_external>\", line 850, in exec_module\n",
" File \"<frozen importlib._bootstrap>\", line 228, in _call_with_frames_removed\n",
" File \"/tmp/modules-cache/datasets_modules/datasets/irds--neuclir_1_zh/bb0b0253c1583dc19fc7decb4442fa2f8df4972ca05d8eccd8c5fdc2252851a2/neuclir_1_zh.py\", line 7, in <module>\n raise ImportError('ir-datasets package missing; `pip install ir-datasets`')\n",
"ImportError: ir-datasets package missing; `pip install ir-datasets`\n"
]
}
```
It should have been deleted:
- by the webhook received from the Hub that tells that the dataset viewer has been disabled (hmmm: I think we don't receive that, in fact)
- by the daily backfill job (hmmm, I think we don't check if the dataset viewer is still enabled) | A dataset with "disabled" viewer has cache entries : For example, the dataset https://huggingface.co/datasets/irds/neuclir_1_zh has disabled the viewer:
> The Dataset Viewer has been disabled on this dataset.
But we have entries in the database:
https://datasets-server.huggingface.co/splits?dataset=irds/neuclir_1_zh
```json
{
"error": "The dataset tries to import a module that is not installed.",
"cause_exception": "ImportError",
"cause_message": "ir-datasets package missing; `pip install ir-datasets`",
"cause_traceback": [
"Traceback (most recent call last):\n",
" File \"/tmp/modules-cache/datasets_modules/datasets/irds--neuclir_1_zh/bb0b0253c1583dc19fc7decb4442fa2f8df4972ca05d8eccd8c5fdc2252851a2/neuclir_1_zh.py\", line 5, in <module>\n import ir_datasets\n",
"ModuleNotFoundError: No module named 'ir_datasets'\n",
"\nDuring handling of the above exception, another exception occurred:\n\n",
"Traceback (most recent call last):\n",
" File \"/src/services/worker/src/worker/job_runners/dataset/config_names.py\", line 55, in compute_config_names_response\n for config in sorted(get_dataset_config_names(path=dataset, token=hf_token))\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 360, in get_dataset_config_names\n builder_cls = get_dataset_builder_class(dataset_module, dataset_name=os.path.basename(path))\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 197, in get_dataset_builder_class\n builder_cls = import_main_class(dataset_module.module_path)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 115, in import_main_class\n module = importlib.import_module(module_path)\n",
" File \"/usr/local/lib/python3.9/importlib/__init__.py\", line 127, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n",
" File \"<frozen importlib._bootstrap>\", line 1030, in _gcd_import\n",
" File \"<frozen importlib._bootstrap>\", line 1007, in _find_and_load\n",
" File \"<frozen importlib._bootstrap>\", line 986, in _find_and_load_unlocked\n",
" File \"<frozen importlib._bootstrap>\", line 680, in _load_unlocked\n",
" File \"<frozen importlib._bootstrap_external>\", line 850, in exec_module\n",
" File \"<frozen importlib._bootstrap>\", line 228, in _call_with_frames_removed\n",
" File \"/tmp/modules-cache/datasets_modules/datasets/irds--neuclir_1_zh/bb0b0253c1583dc19fc7decb4442fa2f8df4972ca05d8eccd8c5fdc2252851a2/neuclir_1_zh.py\", line 7, in <module>\n raise ImportError('ir-datasets package missing; `pip install ir-datasets`')\n",
"ImportError: ir-datasets package missing; `pip install ir-datasets`\n"
]
}
```
It should have been deleted:
- by the webhook received from the Hub that tells that the dataset viewer has been disabled (hmmm: I think we don't receive that, in fact)
- by the daily backfill job (hmmm, I think we don't check if the dataset viewer is still enabled) | closed | 2023-10-05T08:55:01Z | 2024-02-02T12:35:48Z | 2024-02-02T12:35:48Z | severo |
1,927,694,710 | Compute stats for new column types | See https://github.com/huggingface/moon-landing/issues/7649 (internal)
Currently supported column types:
- [ ] primitives:
- [x] all the numbers
- [x] string,
- [x] bool,
- [ ] null,
- [ ] timestamp,
- [ ] duration,
- [x] large_string,
- [ ] binary
- [x] Image
- [x] Audio
- [ ] Translation,
- [ ] TranslationVariableLanguages
- [x] Sequence,
- [x] Array,
- [ ] Array2D,
- [ ] Array3D,
- [ ] Array4D
- [ ] dict
I would appreciate a double-check on the previous inventory. I possibly missed some types.
#1929 might be required before adding new types. | Compute stats for new column types: See https://github.com/huggingface/moon-landing/issues/7649 (internal)
Currently supported column types:
- [ ] primitives:
- [x] all the numbers
- [x] string,
- [x] bool,
- [ ] null,
- [ ] timestamp,
- [ ] duration,
- [x] large_string,
- [ ] binary
- [x] Image
- [x] Audio
- [ ] Translation,
- [ ] TranslationVariableLanguages
- [x] Sequence,
- [x] Array,
- [ ] Array2D,
- [ ] Array3D,
- [ ] Array4D
- [ ] dict
I would appreciate a double-check on the previous inventory. I possibly missed some types.
#1929 might be required before adding new types. | open | 2023-10-05T08:28:56Z | 2024-06-19T14:27:01Z | null | severo |
1,927,686,867 | Add a "feature" or "column" level for better granularity | For example, if we support statistics for a new type of columns, or if we change the way we compute some stats, I think that we don't want to recompute the stats for all the columns, just for one of them.
It's a guess, because maybe it's more efficient to have one job that downloads the data and computes every possible stat than having N jobs that download the same data and compute only one stat. To be evaluated. | Add a "feature" or "column" level for better granularity: For example, if we support statistics for a new type of column, or if we change the way we compute some stats, I think that we don't want to recompute the stats for all the columns, just for one of them.
It's a guess, because maybe it's more efficient to have one job that downloads the data and computes every possible stat than having N jobs that download the same data and compute only one stat. To be evaluated. | closed | 2023-10-05T08:24:50Z | 2024-02-22T21:24:09Z | 2024-02-22T21:24:09Z | severo
1,926,710,153 | assets to s3 | Moving assets to s3.
Second part of https://github.com/huggingface/datasets-server/issues/1406 | assets to s3: Moving assets to s3.
Second part of https://github.com/huggingface/datasets-server/issues/1406 | closed | 2023-10-04T17:52:49Z | 2023-10-06T13:33:10Z | 2023-10-06T12:59:59Z | AndreaFrancis |
1,926,376,272 | fix(chart): add missing job tolerations for karpenter | null | fix(chart): add missing job tolerations for karpenter: | closed | 2023-10-04T14:42:53Z | 2023-10-04T20:49:46Z | 2023-10-04T20:49:45Z | rtrompier |
1,926,204,775 | feat(chart): use spot instance in staging env | Try to use karpenter in the staging env for better optimization of our resource consumption. | feat(chart): use spot instance in staging env: Try to use karpenter in the staging env for better optimization of our resource consumption. | closed | 2023-10-04T13:19:19Z | 2023-10-04T14:15:42Z | 2023-10-04T14:15:41Z | rtrompier
1,926,194,726 | feat: 🎸 update openapi.json spec | after #1900
| feat: 🎸 update openapi.json spec: after #1900
| closed | 2023-10-04T13:14:13Z | 2023-10-04T14:23:23Z | 2023-10-04T14:22:44Z | severo |
1,926,055,620 | Increase heavy workers ram | With 29GB it still OOMs on some mC4 configs like `af` | Increase heavy workers ram: With 29GB it still OOMs on some mC4 configs like `af` | closed | 2023-10-04T11:59:50Z | 2023-10-04T12:06:10Z | 2023-10-04T12:06:09Z | lhoestq |
1,925,873,012 | More CPU for search | This should finally give a decent speed for search (aiming for <15sec on big datasets).
I'll first try it like this and will adapt the other parameters in a second step (ram, num workers)
cc @rtrompier @AndreaFrancis | More CPU for search: This should finally give a decent speed for search (aiming for <15sec on big datasets).
I'll first try it like this and will adapt the other parameters in a second step (ram, num workers)
cc @rtrompier @AndreaFrancis | closed | 2023-10-04T10:18:31Z | 2023-10-04T11:49:04Z | 2023-10-04T11:49:03Z | lhoestq |
1,925,642,747 | fix: 🐛 fix vulnerability (pillow) | see https://github.com/huggingface/datasets-server/security/dependabot/272
also update libcommon in front/admin | fix: 🐛 fix vulnerability (pillow): see https://github.com/huggingface/datasets-server/security/dependabot/272
also update libcommon in front/admin | closed | 2023-10-04T08:08:53Z | 2023-10-04T08:24:23Z | 2023-10-04T08:24:22Z | severo |
1,925,642,007 | Refactor FIRST_ROWS_MAX_NUMBER and MAX_NUM_ROWS_PER_PAGE | As reported by @severo (see: https://github.com/huggingface/datasets-server/pull/1418#discussion_r1338867209):
- Should we handle `MAX_NUM_ROWS_PER_PAGE` with `FIRST_ROWS_MAX_NUMBER`? Or hardcode `FIRST_ROWS_MAX_NUMBER` and use `MAX_NUM_ROWS_PER_PAGE` instead (I think so)
- but we set `FIRST_ROWS_MAX_NUMBER` to 4 in the tests...
Also note that `MAX_NUM_ROWS_PER_PAGE` was already used in "/rows" and "/search", and in "/filter" once #1418 was merged. | Refactor FIRST_ROWS_MAX_NUMBER and MAX_NUM_ROWS_PER_PAGE: As reported by @severo (see: https://github.com/huggingface/datasets-server/pull/1418#discussion_r1338867209):
- Should we handle `MAX_NUM_ROWS_PER_PAGE` with `FIRST_ROWS_MAX_NUMBER`? Or hardcode `FIRST_ROWS_MAX_NUMBER` and use `MAX_NUM_ROWS_PER_PAGE` instead (I think so)
- but we set `FIRST_ROWS_MAX_NUMBER` to 4 in the tests...
Also note that `MAX_NUM_ROWS_PER_PAGE` was already used in "/rows" and "/search", and in "/filter" once #1418 was merged. | closed | 2023-10-04T08:08:26Z | 2024-02-13T10:04:19Z | 2024-02-13T10:04:19Z | albertvillanova
1,924,514,625 | Values of `__hf_index_id` in DuckDB indexes are not in the right order | For example, the first item in the C4 dataset is supposed to be
```
Beginners BBQ Class Taking Place in Missoula!...
```
This is also the first example when querying the DuckDB index, but it has `__hf_index_id == 1000`
```python
con.sql("select * from data limit 5;")
```
```
┌───────────────┬──────────────────────┬──────────────────────┬────────────────────────────────────────────────────────┐
│ __hf_index_id │ text │ timestamp │ url │
│ int64 │ varchar │ varchar │ varchar │
├───────────────┼──────────────────────┼──────────────────────┼────────────────────────────────────────────────────────┤
│ 1000 │ Beginners BBQ Clas… │ 2019-04-25T12:57:54Z │ https://klyq.com/beginners-bbq-class-taking-place-in… │
│ 1001 │ Discussion in 'Mac… │ 2019-04-21T10:07:13Z │ https://forums.macrumors.com/threads/restore-from-la… │
│ 1002 │ Foil plaid lycra a… │ 2019-04-25T10:40:23Z │ https://awishcometrue.com/Catalogs/Clearance/Tweens/… │
│ 1003 │ How many backlinks… │ 2019-04-21T12:46:19Z │ https://www.blackhatworld.com/seo/how-many-backlinks… │
│ 1004 │ The Denver Board o… │ 2019-04-20T14:33:21Z │ http://bond.dpsk12.org/category/news/ │
└───────────────┴──────────────────────┴──────────────────────┴────────────────────────────────────────────────────────┘
```
```python
con.sql("select * from data where __hf_index_id < 5;")
```
```
┌───────────────┬──────────────────────┬──────────────────────┬────────────────────────────────────────────────────────┐
│ __hf_index_id │ text │ timestamp │ url │
│ int64 │ varchar │ varchar │ varchar │
├───────────────┼──────────────────────┼──────────────────────┼────────────────────────────────────────────────────────┤
│ 0 │ Eimhir Ltd, its af… │ 2019-04-24T10:02:25Z │ https://www.game-pal.com/signup │
│ 1 │ Roger & Gallet Thé… │ 2019-04-21T06:22:38Z │ https://www.farmaline.uk/health/order/roger-gallet-t… │
│ 2 │ Folio: 166v - Feas… │ 2019-04-19T17:07:24Z │ http://cantus.uwaterloo.ca/chant/481394 │
│ 3 │ DIY Co Sleeper For… │ 2019-04-21T16:15:08Z │ https://www.musely.com/tips/DIY-Co-Sleeper-For-Baby-… │
│ 4 │ Rep. Ilhan Omar (D… │ 2019-04-26T07:55:58Z │ http://theminnesotasun.com/2019/01/13/ilhan-omar-dec… │
└───────────────┴──────────────────────┴──────────────────────┴────────────────────────────────────────────────────────┘
```
This would show the wrong row idx when clicking on the example after search (which times out atm though)
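A possible workaround on the query side (an assumption, not what the services currently do) is to sort explicitly whenever the original row order matters:
```python
con.sql("select * from data order by __hf_index_id limit 5;")
```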
Not a big deal for now imo | Values of `__hf_index_id` in DuckDB indexes are not in the right order: For example, the first item in the C4 dataset is supposed to be
```
Beginners BBQ Class Taking Place in Missoula!...
```
This is also the first example when querying the DuckDB index, but it has `__hf_index_id == 1000`
```python
con.sql("select * from data limit 5;")
```
```
┌───────────────┬──────────────────────┬──────────────────────┬────────────────────────────────────────────────────────┐
│ __hf_index_id │ text │ timestamp │ url │
│ int64 │ varchar │ varchar │ varchar │
├───────────────┼──────────────────────┼──────────────────────┼────────────────────────────────────────────────────────┤
│ 1000 │ Beginners BBQ Clas… │ 2019-04-25T12:57:54Z │ https://klyq.com/beginners-bbq-class-taking-place-in… │
│ 1001 │ Discussion in 'Mac… │ 2019-04-21T10:07:13Z │ https://forums.macrumors.com/threads/restore-from-la… │
│ 1002 │ Foil plaid lycra a… │ 2019-04-25T10:40:23Z │ https://awishcometrue.com/Catalogs/Clearance/Tweens/… │
│ 1003 │ How many backlinks… │ 2019-04-21T12:46:19Z │ https://www.blackhatworld.com/seo/how-many-backlinks… │
│ 1004 │ The Denver Board o… │ 2019-04-20T14:33:21Z │ http://bond.dpsk12.org/category/news/ │
└───────────────┴──────────────────────┴──────────────────────┴────────────────────────────────────────────────────────┘
```
```python
con.sql("select * from data where __hf_index_id < 5;")
```
```
┌───────────────┬──────────────────────┬──────────────────────┬────────────────────────────────────────────────────────┐
│ __hf_index_id │ text │ timestamp │ url │
│ int64 │ varchar │ varchar │ varchar │
├───────────────┼──────────────────────┼──────────────────────┼────────────────────────────────────────────────────────┤
│ 0 │ Eimhir Ltd, its af… │ 2019-04-24T10:02:25Z │ https://www.game-pal.com/signup │
│ 1 │ Roger & Gallet Thé… │ 2019-04-21T06:22:38Z │ https://www.farmaline.uk/health/order/roger-gallet-t… │
│ 2 │ Folio: 166v - Feas… │ 2019-04-19T17:07:24Z │ http://cantus.uwaterloo.ca/chant/481394 │
│ 3 │ DIY Co Sleeper For… │ 2019-04-21T16:15:08Z │ https://www.musely.com/tips/DIY-Co-Sleeper-For-Baby-… │
│ 4 │ Rep. Ilhan Omar (D… │ 2019-04-26T07:55:58Z │ http://theminnesotasun.com/2019/01/13/ilhan-omar-dec… │
└───────────────┴──────────────────────┴──────────────────────┴────────────────────────────────────────────────────────┘
```
This would show the wrong row idx when clicking on the example after search (which times out atm though)
Not a big deal for now imo | closed | 2023-10-03T16:29:28Z | 2023-11-30T16:13:54Z | 2023-11-30T16:13:54Z | lhoestq |
1,924,455,521 | Add difficulty bonus in admin app | null | Add difficulty bonus in admin app: | closed | 2023-10-03T15:59:12Z | 2023-10-03T21:24:58Z | 2023-10-03T21:24:57Z | lhoestq |
1,924,158,907 | Storage disk metrics are not real-time | There is a lag of up to ~24h for some reason.
I noticed that when running long deletion jobs on the datasets cache

When running the same python code as the one used to send the metric to prometheus, I was able to get the right values.
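For reference, a minimal reproduction of that python code could look like this sketch (the metric name is taken from the exported metrics; the mount path is an assumption):
```python
import psutil
from prometheus_client import Gauge

ASSETS_DISK_USAGE = Gauge("assets_disk_usage", "Usage of the assets disk", ["type"])

usage = psutil.disk_usage("/storage")  # assumed mount path of the assets volume
for key in ("total", "used", "free", "percent"):
    ASSETS_DISK_USAGE.labels(type=key).set(getattr(usage, key))
```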
Maybe the metric refreshes are not frequent enough? | Storage disk metrics are not real-time: There is a lag of up to ~24h for some reason.
I noticed that when running long deletion jobs on the datasets cache

When running the same python code as the one used to send the metric to prometheus, I was able to get the right values.
Maybe the metric refreshes are not frequent enough? | closed | 2023-10-03T13:29:37Z | 2024-06-19T14:26:10Z | 2024-06-19T14:26:10Z | lhoestq
1,924,121,486 | feat: 🎸 support two formats for "sub" in the JWT | asked here: https://github.com/huggingface/moon-landing/pull/7644/files#r1344032256 (internal) | feat: 🎸 support two formats for "sub" in the JWT: asked here: https://github.com/huggingface/moon-landing/pull/7644/files#r1344032256 (internal) | closed | 2023-10-03T13:09:51Z | 2023-10-03T13:44:46Z | 2023-10-03T13:44:45Z | severo |
1,924,071,645 | Add heavy workers | Following https://github.com/huggingface/datasets-server/pull/1903
Close https://github.com/huggingface/datasets-server/issues/1891 | Add heavy workers: Following https://github.com/huggingface/datasets-server/pull/1903
Close https://github.com/huggingface/datasets-server/issues/1891 | closed | 2023-10-03T12:45:41Z | 2023-10-03T12:49:22Z | 2023-10-03T12:49:21Z | lhoestq |
1,924,055,859 | Fix audio assets from streaming | Some audio files were not correctly created when the data comes from streaming and is decoded (i.e. is a numpy array representing the audio sample)
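For context, a decoded audio cell coming from streaming looks roughly like this, and the asset can be written from the raw samples (a sketch of the idea; the `soundfile` usage is an assumption about the fix):
```python
import numpy as np
import soundfile as sf

# decoded streaming audio: a dict with the raw samples and the sampling rate
cell = {"path": None, "array": np.zeros(16_000, dtype=np.float32), "sampling_rate": 16_000}
sf.write("audio.wav", cell["array"], cell["sampling_rate"])  # produces a valid, non-empty wav
```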
Fix https://github.com/huggingface/datasets-server/issues/1912
Bug was introduced in https://github.com/huggingface/datasets-server/pull/1788
Will have to delete `split-first-rows-from-streaming` cache responses with http_status 200 and audio features since that date | Fix audio assets from streaming: Some audio files were not correctly created when the data comes from streaming and is decoded (i.e. is a numpy array representing the audio sample)
Fix https://github.com/huggingface/datasets-server/issues/1912
Bug was introduced in https://github.com/huggingface/datasets-server/pull/1788
Will have to delete `split-first-rows-from-streaming` cache responses with http_status 200 and audio features since that date | closed | 2023-10-03T12:36:47Z | 2023-10-04T15:42:52Z | 2023-10-03T15:29:24Z | lhoestq |
1,923,882,815 | upgrade duckdb to >=0.9 and pandas to >=2.1.1 | See previous intent here: https://github.com/huggingface/datasets-server/pull/1827
If we apply it, we must recreate the duckdb files created with duckdb <= 0.8 | upgrade duckdb to >=0.9 and pandas to >=2.1.1: See previous intent here: https://github.com/huggingface/datasets-server/pull/1827
If we apply it, we must recreate the duckdb files created with duckdb <= 0.8 | closed | 2023-10-03T11:05:18Z | 2024-03-08T12:01:05Z | 2024-03-08T12:01:05Z | severo |
1,923,871,949 | feat: 🎸 reduce the number of workers | null | feat: 🎸 reduce the number of workers: | closed | 2023-10-03T10:58:17Z | 2023-10-03T11:00:40Z | 2023-10-03T11:00:39Z | severo |
1,923,780,985 | Some audio files in /first-rows are 0 bytes | Some audio files in /first-rows are 0 bytes, e.g.
https://datasets-server.huggingface.co/assets/ccmusic-database/acapella_eval/--/default/song1/0/song/audio.wav | Some audio files in /first-rows are 0 bytes: Some audio files in /first-rows are 0 bytes, e.g.
https://datasets-server.huggingface.co/assets/ccmusic-database/acapella_eval/--/default/song1/0/song/audio.wav | closed | 2023-10-03T10:06:41Z | 2023-10-03T15:29:26Z | 2023-10-03T15:29:25Z | lhoestq |
1,923,741,990 | Revert "Update pandas to 2.1.1 and duckdb to 0.9.0 (#1827)" | This reverts commit 93bcf55fafe4bc23d3dd7dc75dc7c8ff216f7df0.
Fix #1910.
Note that we cannot update `pandas` to patch release 2.1.1 if we do not update `duckdb`. See discussion in: https://github.com/huggingface/datasets-server/pull/1827#issuecomment-1729090034
| Revert "Update pandas to 2.1.1 and duckdb to 0.9.0 (#1827)": This reverts commit 93bcf55fafe4bc23d3dd7dc75dc7c8ff216f7df0.
Fix #1910.
Note that we cannot update `pandas` to patch release 2.1.1 if we do not update `duckdb`. See discussion in: https://github.com/huggingface/datasets-server/pull/1827#issuecomment-1729090034
| closed | 2023-10-03T09:45:12Z | 2023-10-03T10:22:42Z | 2023-10-03T10:22:41Z | albertvillanova |
1,923,739,278 | Revert update of duckdb | Revert update of duckdb because indexes computed with prior version are not compatible with the latest version:
- #1827
If we eventually update duckdb, we will need to recompute all the indexes.
Reported by @lhoestq | Revert update of duckdb: Revert update of duckdb because indexes computed with prior version are not compatible with the latest version:
- #1827
If we eventually update duckdb, we will need to recompute all the indexes.
Reported by @lhoestq | closed | 2023-10-03T09:43:34Z | 2023-10-03T10:22:42Z | 2023-10-03T10:22:42Z | albertvillanova |
1,923,622,154 | Fix the cache metrics | <img width="381" alt="Capture d’écran 2023-10-03 à 10 34 02" src="https://github.com/huggingface/datasets-server/assets/1676121/9ed30ffc-f44b-4f45-b673-160decfeed34">
1. They are fluctuating. Maybe due to terminated uvicorn workers (see https://github.com/huggingface/datasets-server/issues/889)? Or to how we aggregate the metrics in Grafana (we take the median of all reports: `quantile by(kind) (0.5, sum by (pid, kind) (responses_in_cache_total))`)
2. We should have about the same number of entries for each split-level step, config-level step, and dataset-level step, but that's not the case. | Fix the cache metrics: <img width="381" alt="Capture d’écran 2023-10-03 à 10 34 02" src="https://github.com/huggingface/datasets-server/assets/1676121/9ed30ffc-f44b-4f45-b673-160decfeed34">
1. They are fluctuating. Maybe due to terminated uvicorn workers (see https://github.com/huggingface/datasets-server/issues/889)? Or to how we aggregate the metrics in Grafana (we take the median of all reports: `quantile by(kind) (0.5, sum by (pid, kind) (responses_in_cache_total))`)
2. We should have about the same number of entries for each split-level step, config-level step, and dataset-level step, but that's not the case. | closed | 2023-10-03T08:41:06Z | 2024-02-23T10:28:01Z | 2024-02-23T10:28:01Z | severo |
1,923,540,500 | Test get_previous_step_or_raise function | Currently, we do not test the function `get_previous_step_or_raise` (in `libcommon.simple_cache`).
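A first test could look like this sketch (the helper signatures and the shape of the returned object are assumptions):
```python
from http import HTTPStatus

from libcommon.simple_cache import get_previous_step_or_raise, upsert_response

def test_get_previous_step_or_raise() -> None:
    upsert_response(kind="dataset-config-names", dataset="dataset", content={}, http_status=HTTPStatus.OK)
    best_response = get_previous_step_or_raise(kinds=["dataset-config-names"], dataset="dataset")
    assert best_response.response["http_status"] == HTTPStatus.OK
```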
We need to implement some tests of it, as discussed in: https://github.com/huggingface/datasets-server/pull/1418#discussion_r1240000135 | Test get_previous_step_or_raise function: Currently, we do not test the function `get_previous_step_or_raise` (in `libcommon.simple_cache`).
We need to implement some tests of it, as discussed in: https://github.com/huggingface/datasets-server/pull/1418#discussion_r1240000135 | open | 2023-10-03T07:57:20Z | 2023-11-02T16:43:55Z | null | albertvillanova |
1,923,526,356 | CI is broken due to vulnerability in urllib3 1.26.16 | See: https://github.com/huggingface/datasets-server/actions/runs/6389729101/job/17341620966
```
Found 1 known vulnerability in 1 package
Name Version ID Fix Versions
------- ------- ------------------- -------------
urllib3 1.26.16 GHSA-v845-jxx5-vc9f 1.26.17,2.0.6
``` | CI is broken due to vulnerability in urllib3 1.26.16: See: https://github.com/huggingface/datasets-server/actions/runs/6389729101/job/17341620966
```
Found 1 known vulnerability in 1 package
Name Version ID Fix Versions
------- ------- ------------------- -------------
urllib3 1.26.16 GHSA-v845-jxx5-vc9f 1.26.17,2.0.6
``` | closed | 2023-10-03T07:50:00Z | 2023-10-03T09:48:21Z | 2023-10-03T09:48:21Z | albertvillanova |
1,923,512,525 | Update urllib3 to 1.26.17 to fix vulnerability | This should fix 12 dependabot alerts.
Fix #1907. | Update urllib3 to 1.26.17 to fix vulnerability: This should fix 12 dependabot alerts.
Fix #1907. | closed | 2023-10-03T07:40:51Z | 2023-10-03T09:48:21Z | 2023-10-03T09:48:20Z | albertvillanova |
1,923,233,663 | build(deps-dev): bump urllib3 from 1.26.16 to 1.26.17 in /libs/libcommon | Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.16 to 1.26.17.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/urllib3/urllib3/releases">urllib3's releases</a>.</em></p>
<blockquote>
<h2>1.26.17</h2>
<ul>
<li>Added the <code>Cookie</code> header to the list of headers to strip from requests when redirecting to a different host. As before, different headers can be set via <code>Retry.remove_headers_on_redirect</code>. (GHSA-v845-jxx5-vc9f)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/urllib3/urllib3/blob/main/CHANGES.rst">urllib3's changelog</a>.</em></p>
<blockquote>
<h1>1.26.17 (2023-10-02)</h1>
<ul>
<li>Added the <code>Cookie</code> header to the list of headers to strip from requests when redirecting to a different host. As before, different headers can be set via <code>Retry.remove_headers_on_redirect</code>. (<code>[#3139](https://github.com/urllib3/urllib3/issues/3139) <https://github.com/urllib3/urllib3/pull/3139></code>_)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/urllib3/urllib3/commit/c9016bf464751a02b7e46f8b86504f47d4238784"><code>c9016bf</code></a> Release 1.26.17</li>
<li><a href="https://github.com/urllib3/urllib3/commit/01220354d389cd05474713f8c982d05c9b17aafb"><code>0122035</code></a> Backport GHSA-v845-jxx5-vc9f (<a href="https://redirect.github.com/urllib3/urllib3/issues/3139">#3139</a>)</li>
<li><a href="https://github.com/urllib3/urllib3/commit/e63989f97d206e839ab9170c8a76e3e097cc60e8"><code>e63989f</code></a> Fix installing <code>brotli</code> extra on Python 2.7</li>
<li><a href="https://github.com/urllib3/urllib3/commit/2e7a24d08713a0131f0b3c7197889466d645cc49"><code>2e7a24d</code></a> [1.26] Configure OS for RTD to fix building docs</li>
<li><a href="https://github.com/urllib3/urllib3/commit/57181d6ea910ac7cb2ff83345d9e5e0eb816a0d0"><code>57181d6</code></a> [1.26] Improve error message when calling urllib3.request() (<a href="https://redirect.github.com/urllib3/urllib3/issues/3058">#3058</a>)</li>
<li><a href="https://github.com/urllib3/urllib3/commit/3c0148048a523325819377b23fc67f8d46afc3aa"><code>3c01480</code></a> [1.26] Run coverage even with failed jobs</li>
<li>See full diff in <a href="https://github.com/urllib3/urllib3/compare/1.26.16...1.26.17">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/datasets-server/network/alerts).
</details> | build(deps-dev): bump urllib3 from 1.26.16 to 1.26.17 in /libs/libcommon: Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.16 to 1.26.17.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/urllib3/urllib3/releases">urllib3's releases</a>.</em></p>
<blockquote>
<h2>1.26.17</h2>
<ul>
<li>Added the <code>Cookie</code> header to the list of headers to strip from requests when redirecting to a different host. As before, different headers can be set via <code>Retry.remove_headers_on_redirect</code>. (GHSA-v845-jxx5-vc9f)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/urllib3/urllib3/blob/main/CHANGES.rst">urllib3's changelog</a>.</em></p>
<blockquote>
<h1>1.26.17 (2023-10-02)</h1>
<ul>
<li>Added the <code>Cookie</code> header to the list of headers to strip from requests when redirecting to a different host. As before, different headers can be set via <code>Retry.remove_headers_on_redirect</code>. (<code>[#3139](https://github.com/urllib3/urllib3/issues/3139) <https://github.com/urllib3/urllib3/pull/3139></code>_)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/urllib3/urllib3/commit/c9016bf464751a02b7e46f8b86504f47d4238784"><code>c9016bf</code></a> Release 1.26.17</li>
<li><a href="https://github.com/urllib3/urllib3/commit/01220354d389cd05474713f8c982d05c9b17aafb"><code>0122035</code></a> Backport GHSA-v845-jxx5-vc9f (<a href="https://redirect.github.com/urllib3/urllib3/issues/3139">#3139</a>)</li>
<li><a href="https://github.com/urllib3/urllib3/commit/e63989f97d206e839ab9170c8a76e3e097cc60e8"><code>e63989f</code></a> Fix installing <code>brotli</code> extra on Python 2.7</li>
<li><a href="https://github.com/urllib3/urllib3/commit/2e7a24d08713a0131f0b3c7197889466d645cc49"><code>2e7a24d</code></a> [1.26] Configure OS for RTD to fix building docs</li>
<li><a href="https://github.com/urllib3/urllib3/commit/57181d6ea910ac7cb2ff83345d9e5e0eb816a0d0"><code>57181d6</code></a> [1.26] Improve error message when calling urllib3.request() (<a href="https://redirect.github.com/urllib3/urllib3/issues/3058">#3058</a>)</li>
<li><a href="https://github.com/urllib3/urllib3/commit/3c0148048a523325819377b23fc67f8d46afc3aa"><code>3c01480</code></a> [1.26] Run coverage even with failed jobs</li>
<li>See full diff in <a href="https://github.com/urllib3/urllib3/compare/1.26.16...1.26.17">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/datasets-server/network/alerts).
</details> | closed | 2023-10-03T03:44:01Z | 2023-10-03T10:23:31Z | 2023-10-03T10:23:29Z | dependabot[bot] |
1,923,233,361 | build(deps-dev): bump urllib3 from 1.26.16 to 1.26.17 in /e2e | Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.16 to 1.26.17.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/urllib3/urllib3/releases">urllib3's releases</a>.</em></p>
<blockquote>
<h2>1.26.17</h2>
<ul>
<li>Added the <code>Cookie</code> header to the list of headers to strip from requests when redirecting to a different host. As before, different headers can be set via <code>Retry.remove_headers_on_redirect</code>. (GHSA-v845-jxx5-vc9f)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/urllib3/urllib3/blob/main/CHANGES.rst">urllib3's changelog</a>.</em></p>
<blockquote>
<h1>1.26.17 (2023-10-02)</h1>
<ul>
<li>Added the <code>Cookie</code> header to the list of headers to strip from requests when redirecting to a different host. As before, different headers can be set via <code>Retry.remove_headers_on_redirect</code>. (<code>[#3139](https://github.com/urllib3/urllib3/issues/3139) <https://github.com/urllib3/urllib3/pull/3139></code>_)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/urllib3/urllib3/commit/c9016bf464751a02b7e46f8b86504f47d4238784"><code>c9016bf</code></a> Release 1.26.17</li>
<li><a href="https://github.com/urllib3/urllib3/commit/01220354d389cd05474713f8c982d05c9b17aafb"><code>0122035</code></a> Backport GHSA-v845-jxx5-vc9f (<a href="https://redirect.github.com/urllib3/urllib3/issues/3139">#3139</a>)</li>
<li><a href="https://github.com/urllib3/urllib3/commit/e63989f97d206e839ab9170c8a76e3e097cc60e8"><code>e63989f</code></a> Fix installing <code>brotli</code> extra on Python 2.7</li>
<li><a href="https://github.com/urllib3/urllib3/commit/2e7a24d08713a0131f0b3c7197889466d645cc49"><code>2e7a24d</code></a> [1.26] Configure OS for RTD to fix building docs</li>
<li><a href="https://github.com/urllib3/urllib3/commit/57181d6ea910ac7cb2ff83345d9e5e0eb816a0d0"><code>57181d6</code></a> [1.26] Improve error message when calling urllib3.request() (<a href="https://redirect.github.com/urllib3/urllib3/issues/3058">#3058</a>)</li>
<li><a href="https://github.com/urllib3/urllib3/commit/3c0148048a523325819377b23fc67f8d46afc3aa"><code>3c01480</code></a> [1.26] Run coverage even with failed jobs</li>
<li>See full diff in <a href="https://github.com/urllib3/urllib3/compare/1.26.16...1.26.17">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/datasets-server/network/alerts).
</details> | build(deps-dev): bump urllib3 from 1.26.16 to 1.26.17 in /e2e: Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.16 to 1.26.17.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/urllib3/urllib3/releases">urllib3's releases</a>.</em></p>
<blockquote>
<h2>1.26.17</h2>
<ul>
<li>Added the <code>Cookie</code> header to the list of headers to strip from requests when redirecting to a different host. As before, different headers can be set via <code>Retry.remove_headers_on_redirect</code>. (GHSA-v845-jxx5-vc9f)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/urllib3/urllib3/blob/main/CHANGES.rst">urllib3's changelog</a>.</em></p>
<blockquote>
<h1>1.26.17 (2023-10-02)</h1>
<ul>
<li>Added the <code>Cookie</code> header to the list of headers to strip from requests when redirecting to a different host. As before, different headers can be set via <code>Retry.remove_headers_on_redirect</code>. (<code>[#3139](https://github.com/urllib3/urllib3/issues/3139) <https://github.com/urllib3/urllib3/pull/3139></code>_)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/urllib3/urllib3/commit/c9016bf464751a02b7e46f8b86504f47d4238784"><code>c9016bf</code></a> Release 1.26.17</li>
<li><a href="https://github.com/urllib3/urllib3/commit/01220354d389cd05474713f8c982d05c9b17aafb"><code>0122035</code></a> Backport GHSA-v845-jxx5-vc9f (<a href="https://redirect.github.com/urllib3/urllib3/issues/3139">#3139</a>)</li>
<li><a href="https://github.com/urllib3/urllib3/commit/e63989f97d206e839ab9170c8a76e3e097cc60e8"><code>e63989f</code></a> Fix installing <code>brotli</code> extra on Python 2.7</li>
<li><a href="https://github.com/urllib3/urllib3/commit/2e7a24d08713a0131f0b3c7197889466d645cc49"><code>2e7a24d</code></a> [1.26] Configure OS for RTD to fix building docs</li>
<li><a href="https://github.com/urllib3/urllib3/commit/57181d6ea910ac7cb2ff83345d9e5e0eb816a0d0"><code>57181d6</code></a> [1.26] Improve error message when calling urllib3.request() (<a href="https://redirect.github.com/urllib3/urllib3/issues/3058">#3058</a>)</li>
<li><a href="https://github.com/urllib3/urllib3/commit/3c0148048a523325819377b23fc67f8d46afc3aa"><code>3c01480</code></a> [1.26] Run coverage even with failed jobs</li>
<li>See full diff in <a href="https://github.com/urllib3/urllib3/compare/1.26.16...1.26.17">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/datasets-server/network/alerts).
</details> | closed | 2023-10-03T03:43:40Z | 2023-10-03T11:01:44Z | 2023-10-03T11:01:34Z | dependabot[bot] |
1,922,449,931 | Add bonus difficulty if dataset is big | Add a bonus difficulty of 20 for `duckdb-index` if the dataset is bigger than 3GB (from the config info).
This is defined in the orchestrator.
This can be used to define "heavy workers" that would have enough RAM to process them (up to 29GB of memory for datasets like C4 that have their first 5GB converted to parquet)
The main changes are in:
libs/libcommon/src/libcommon/config.py
libs/libcommon/src/libcommon/constants.py
libs/libcommon/src/libcommon/orchestrator.py
libs/libcommon/src/libcommon/processing_graph.py
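In short, the orchestrator applies something like the following when creating jobs (a sketch; the names and the base value are assumptions):
```python
BONUS_DIFFICULTY_IF_DATASET_IS_BIG = 20
BIG_DATASET_THRESHOLD_BYTES = 3 * 1024**3  # 3GB, read from the config info

def compute_difficulty(base_difficulty: int, dataset_num_bytes: int) -> int:
    if dataset_num_bytes > BIG_DATASET_THRESHOLD_BYTES:
        return base_difficulty + BONUS_DIFFICULTY_IF_DATASET_IS_BIG
    return base_difficulty
```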
TODO:
- [x] tests
Related to https://github.com/huggingface/datasets-server/issues/1891 | Add bonus difficulty if dataset is big: Add a bonus difficulty of 20 for `duckdb-index` if the dataset is bigger than 3GB (from the config info).
This is defined in the orchestrator.
This can be used to define "heavy workers" that would have enough RAM to process them (up to 29GB of memory for datasets like C4 that have their first 5GB converted to parquet)
The main changes are in:
libs/libcommon/src/libcommon/config.py
libs/libcommon/src/libcommon/constants.py
libs/libcommon/src/libcommon/orchestrator.py
libs/libcommon/src/libcommon/processing_graph.py
TODO:
- [x] tests
Related to https://github.com/huggingface/datasets-server/issues/1891 | closed | 2023-10-02T18:48:10Z | 2023-10-03T12:49:44Z | 2023-10-03T12:40:27Z | lhoestq |
1,922,327,284 | feat: 🎸 use a timezone-aware date in stale bot script | See https://github.com/huggingface/datasets-server/actions/runs/6381892186/job/17319474629 | feat: 🎸 use a timezone-aware date in stale bot script: See https://github.com/huggingface/datasets-server/actions/runs/6381892186/job/17319474629 | closed | 2023-10-02T17:25:30Z | 2023-10-02T17:31:50Z | 2023-10-02T17:31:49Z | severo |
1,922,313,507 | feat: 🎸 allow wildcard only in dataset name of blocked datasets | forbidden in namespace or canonical names. See
https://github.com/huggingface/datasets-server/pull/1899#pullrequestreview-1653142782 | feat: 🎸 allow wildcard only in dataset name of blocked datasets: forbidden in namespace or canonical names. See
https://github.com/huggingface/datasets-server/pull/1899#pullrequestreview-1653142782 | closed | 2023-10-02T17:15:12Z | 2023-10-02T17:42:56Z | 2023-10-02T17:42:55Z | severo |
1,922,159,165 | Fix class label computation check and return counts for all labels, not only those found in split | There might be cases when the number of unique values found in a set is not equal to the number of classes predefined in the `ClassLabel` feature (for example, test sets might contain only `no label` values). So I check that the number of unique values is not greater than the number of classes stored in a feature instead of checking their equality as before.
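Roughly, the relaxed check and the aligned counts look like this (a sketch; the variable names are assumptions):
```python
# `column` is a pandas Series of label ids, `feature` a datasets.ClassLabel
n_unique = column.nunique()
if n_unique > len(feature.names):
    raise ValueError("Unique values exceed the number of predefined classes")
counts = column.value_counts().to_dict()
# report every predefined class, including those absent from the split
frequencies = {name: counts.get(label_id, 0) for label_id, name in enumerate(feature.names)}
```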
**Also return all classes and their counts in the response, not only those found (to make different splits aligned - but for string classes the behavior is different because we don't have predefined classes there).**
For example, for [`glue`](https://huggingface.co/datasets/glue), `ax` config, `test` set the result will be:
```python
{
"column_name": "label",
"column_type": "class_label",
"column_statistics": {
"nan_count": 0,
"nan_proportion": 0.0,
"no_label_count": 1104,
"no_label_proportion": 1.0,
"n_unique": 3,
"frequencies": { # without this change it would be an empty dict
"entailment": 0,
"neutral": 0,
"contradiction": 0
}
}
}
``` | Fix class label computation check and return counts for all labels, not only those found in split: There might be cases when the number of unique values found in a set is not equal to the number of classes predefined in the `ClassLabel` feature (for example, test sets might contain only `no label` values). So I check that the number of unique values is not greater than the number of classes stored in a feature instead of checking their equality as before.
**Also return all classes and their counts in the response, not only those found (to make different splits aligned - but for string classes the behavior is different because we don't have predefined classes there).**
For example, for [`glue`](https://huggingface.co/datasets/glue), `ax` config, `test` set the result will be:
```python
{
"column_name": "label",
"column_type": "class_label",
"column_statistics": {
"nan_count": 0,
"nan_proportion": 0.0,
"no_label_count": 1104,
"no_label_proportion": 1.0,
"n_unique": 3,
"frequencies": { # without this change it would be an empty dict
"entailment": 0,
"neutral": 0,
"contradiction": 0
}
}
}
``` | closed | 2023-10-02T15:35:24Z | 2023-10-02T16:54:43Z | 2023-10-02T16:54:26Z | polinaeterna |
1,921,893,209 | feat: 🎸 delete the dataset and raise if blocked | in libcommon, not only in the workers.
Fixes #1897 | feat: 🎸 delete the dataset and raise if blocked: in libcommon, not only in the workers.
Fixes #1897 | closed | 2023-10-02T13:10:42Z | 2023-10-02T17:15:47Z | 2023-10-02T16:53:54Z | severo |
1,921,665,535 | Delete unnecessary condition on expected_error_code | As reported by @AndreaFrancis (see: https://github.com/huggingface/datasets-server/pull/1418#discussion_r1340621923), the condition on `expected_error_code` can be removed because it equals `None`. | Delete unnecessary condition on expected_error_code: As reported by @AndreaFrancis (see: https://github.com/huggingface/datasets-server/pull/1418#discussion_r1340621923), the condition on `expected_error_code` can be removed because it equals `None`. | closed | 2023-10-02T10:41:32Z | 2023-10-02T14:06:38Z | 2023-10-02T14:06:37Z | albertvillanova |
1,921,467,839 | Reduce amount of jobs for blocked datasets | Currently most of the jobs are for datasets inside the `open-llm-leaderboard` org, which are all blocked
https://github.com/huggingface/datasets-server/blob/f77e375a49503693c2537c7a4b35229551b37b95/chart/env/prod.yaml#L149
We could directly set the value for all the steps to a "blocked" error?
Also: currently the blocked list is for `config-parquet-and-info`. We might want to apply it to the whole DAG. | Reduce amount of jobs for blocked datasets: Currently most of the jobs are for datasets inside the `open-llm-leaderboard` org, which are all blocked
https://github.com/huggingface/datasets-server/blob/f77e375a49503693c2537c7a4b35229551b37b95/chart/env/prod.yaml#L149
We could directly set the value for all the steps to a "blocked" error?
Also: currently the blocked list is for `config-parquet-and-info`. We might want to apply it to the whole DAG. | closed | 2023-10-02T08:26:51Z | 2023-10-02T16:53:55Z | 2023-10-02T16:53:55Z | severo |
1,919,557,688 | feat: 🎸 set "truncated" field of /first-rows as mandatory | (we migrated all the entries in the prod database) | feat: 🎸 set "truncated" field of /first-rows as mandatory: (we migrated all the entries in the prod database) | closed | 2023-09-29T16:10:24Z | 2023-09-29T16:23:08Z | 2023-09-29T16:22:35Z | severo |
1,919,490,053 | e2e tests fail with `RuntimeError: Poll timeout` after 2+ hours of running | Started after [this commit](https://github.com/huggingface/datasets-server/commit/3c6c79b6184295d52518a7499cbfe1616077a753) but doesn't seem to be related to it?
```
2023-09-28T20:39:19.0725016Z =========================== short test summary info ============================
2023-09-28T20:39:19.0725373Z FAILED tests/test_11_api.py::test_auth_e2e[public-none-200-None] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0725700Z FAILED tests/test_11_api.py::test_auth_e2e[public-token-200-None] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0726149Z FAILED tests/test_11_api.py::test_auth_e2e[public-cookie-200-None] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0726464Z FAILED tests/test_11_api.py::test_auth_e2e[gated-token-200-None] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0726782Z FAILED tests/test_11_api.py::test_auth_e2e[gated-cookie-200-None] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0727091Z FAILED tests/test_11_api.py::test_endpoint[/splits-dataset] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0727511Z FAILED tests/test_11_api.py::test_endpoint[/splits-config] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0727836Z FAILED tests/test_11_api.py::test_endpoint[/first-rows-split] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0728146Z FAILED tests/test_11_api.py::test_endpoint[/parquet-dataset] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0728454Z FAILED tests/test_11_api.py::test_endpoint[/parquet-config] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0728758Z FAILED tests/test_11_api.py::test_endpoint[/info-dataset] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0729056Z FAILED tests/test_11_api.py::test_endpoint[/info-config] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0729355Z FAILED tests/test_11_api.py::test_endpoint[/size-dataset] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0729650Z FAILED tests/test_11_api.py::test_endpoint[/size-config] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0729965Z FAILED tests/test_11_api.py::test_endpoint[/is-valid-dataset] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0730280Z FAILED tests/test_11_api.py::test_endpoint[/statistics-split] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0730563Z FAILED tests/test_11_api.py::test_rows_endpoint - RuntimeError: Poll timeout
2023-09-28T20:39:19.0730876Z FAILED tests/test_14_statistics.py::test_statistics_endpoint - RuntimeError: Poll timeout
2023-09-28T20:39:19.0739418Z FAILED tests/test_31_admin_metrics.py::test_metrics - AssertionError: queue_jobs_total - queue=dataset-config-names found in {'starlette_requests_in_progress{method="GET",path_template="/healthcheck",pid="12"}': 0.0, 'starlette_requests_in_progress{method="GET",path_template="/metrics",pid="12"}': 1.0, 'starlette_requests_in_progress{method="GET",path_template="/healthcheck",pid="13"}': 0.0, 'starlette_requests_processing_time_seconds_sum{method="GET",path_template="/healthcheck"}': 0.0014055180005243528, 'starlette_requests_processing_time_seconds_bucket{le="0.005",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.01",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.025",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.05",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.075",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.1",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.25",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.5",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.75",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="1.0",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="2.5",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="5.0",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="7.5",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="10.0",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="+Inf",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_count{method="GET",path_template="/healthcheck"}': 2.0, 'queue_jobs_total{pid="12",queue="dataset-config-names",status="waiting"}': 2.0, 'assets_disk_usage{pid="12",type="total"}': 89297309696.0, 'assets_disk_usage{pid="12",type="used"}': 82270883840.0, 'assets_disk_usage{pid="12",type="free"}': 7009648640.0, 'assets_disk_usage{pid="12",type="percent"}': 92.1, 'descriptive_statistics_disk_usage{pid="12",type="total"}': 89297309696.0, 'descriptive_statistics_disk_usage{pid="12",type="used"}': 82270883840.0, 'descriptive_statistics_disk_usage{pid="12",type="free"}': 7009648640.0, 'descriptive_statistics_disk_usage{pid="12",type="percent"}': 92.1, 'duckdb_disk_usage{pid="12",type="total"}': 89297309696.0, 'duckdb_disk_usage{pid="12",type="used"}': 82270883840.0, 'duckdb_disk_usage{pid="12",type="free"}': 7009648640.0, 'duckdb_disk_usage{pid="12",type="percent"}': 92.1, 'hf_datasets_disk_usage{pid="12",type="total"}': 89297309696.0, 'hf_datasets_disk_usage{pid="12",type="used"}': 82270883840.0, 'hf_datasets_disk_usage{pid="12",type="free"}': 7009648640.0, 'hf_datasets_disk_usage{pid="12",type="percent"}': 92.1, 'parquet_metadata_disk_usage{pid="12",type="total"}': 89297309696.0, 'parquet_metadata_disk_usage{pid="12",type="used"}': 82270883840.0, 
'parquet_metadata_disk_usage{pid="12",type="free"}': 7009648640.0, 'parquet_metadata_disk_usage{pid="12",type="percent"}': 92.1, 'starlette_requests_total{method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_total{method="GET",path_template="/metrics"}': 1.0, 'starlette_responses_total{method="GET",path_template="/healthcheck",status_code="200"}': 2.0}
2023-09-28T20:39:19.0739643Z assert False
2023-09-28T20:39:19.0740723Z + where False = has_metric(name='queue_jobs_total', labels={'pid': '[0-9]*', 'queue': 'dataset-config-names', 'status': 'started'}, metric_names={'assets_disk_usage{pid="12",type="free"}', 'assets_disk_usage{pid="12",type="percent"}', 'assets_disk_usage{pid="12",type="total"}', 'assets_disk_usage{pid="12",type="used"}', 'descriptive_statistics_disk_usage{pid="12",type="free"}', 'descriptive_statistics_disk_usage{pid="12",type="percent"}', ...})
2023-09-28T20:39:19.0741032Z FAILED tests/test_52_search.py::test_search_endpoint - RuntimeError: Poll timeout
2023-09-28T20:39:19.0741198Z ================= 20 failed, 27 passed in 12879.59s (3:34:39) ==================
``` | e2e tests fail with `RuntimeError: Poll timeout` after 2+ hours of running: Started after [this commit](https://github.com/huggingface/datasets-server/commit/3c6c79b6184295d52518a7499cbfe1616077a753) but doesn't seem to be related to it?
```
2023-09-28T20:39:19.0725016Z =========================== short test summary info ============================
2023-09-28T20:39:19.0725373Z FAILED tests/test_11_api.py::test_auth_e2e[public-none-200-None] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0725700Z FAILED tests/test_11_api.py::test_auth_e2e[public-token-200-None] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0726149Z FAILED tests/test_11_api.py::test_auth_e2e[public-cookie-200-None] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0726464Z FAILED tests/test_11_api.py::test_auth_e2e[gated-token-200-None] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0726782Z FAILED tests/test_11_api.py::test_auth_e2e[gated-cookie-200-None] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0727091Z FAILED tests/test_11_api.py::test_endpoint[/splits-dataset] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0727511Z FAILED tests/test_11_api.py::test_endpoint[/splits-config] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0727836Z FAILED tests/test_11_api.py::test_endpoint[/first-rows-split] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0728146Z FAILED tests/test_11_api.py::test_endpoint[/parquet-dataset] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0728454Z FAILED tests/test_11_api.py::test_endpoint[/parquet-config] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0728758Z FAILED tests/test_11_api.py::test_endpoint[/info-dataset] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0729056Z FAILED tests/test_11_api.py::test_endpoint[/info-config] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0729355Z FAILED tests/test_11_api.py::test_endpoint[/size-dataset] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0729650Z FAILED tests/test_11_api.py::test_endpoint[/size-config] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0729965Z FAILED tests/test_11_api.py::test_endpoint[/is-valid-dataset] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0730280Z FAILED tests/test_11_api.py::test_endpoint[/statistics-split] - RuntimeError: Poll timeout
2023-09-28T20:39:19.0730563Z FAILED tests/test_11_api.py::test_rows_endpoint - RuntimeError: Poll timeout
2023-09-28T20:39:19.0730876Z FAILED tests/test_14_statistics.py::test_statistics_endpoint - RuntimeError: Poll timeout
2023-09-28T20:39:19.0739418Z FAILED tests/test_31_admin_metrics.py::test_metrics - AssertionError: queue_jobs_total - queue=dataset-config-names found in {'starlette_requests_in_progress{method="GET",path_template="/healthcheck",pid="12"}': 0.0, 'starlette_requests_in_progress{method="GET",path_template="/metrics",pid="12"}': 1.0, 'starlette_requests_in_progress{method="GET",path_template="/healthcheck",pid="13"}': 0.0, 'starlette_requests_processing_time_seconds_sum{method="GET",path_template="/healthcheck"}': 0.0014055180005243528, 'starlette_requests_processing_time_seconds_bucket{le="0.005",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.01",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.025",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.05",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.075",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.1",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.25",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.5",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.75",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="1.0",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="2.5",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="5.0",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="7.5",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="10.0",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="+Inf",method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_processing_time_seconds_count{method="GET",path_template="/healthcheck"}': 2.0, 'queue_jobs_total{pid="12",queue="dataset-config-names",status="waiting"}': 2.0, 'assets_disk_usage{pid="12",type="total"}': 89297309696.0, 'assets_disk_usage{pid="12",type="used"}': 82270883840.0, 'assets_disk_usage{pid="12",type="free"}': 7009648640.0, 'assets_disk_usage{pid="12",type="percent"}': 92.1, 'descriptive_statistics_disk_usage{pid="12",type="total"}': 89297309696.0, 'descriptive_statistics_disk_usage{pid="12",type="used"}': 82270883840.0, 'descriptive_statistics_disk_usage{pid="12",type="free"}': 7009648640.0, 'descriptive_statistics_disk_usage{pid="12",type="percent"}': 92.1, 'duckdb_disk_usage{pid="12",type="total"}': 89297309696.0, 'duckdb_disk_usage{pid="12",type="used"}': 82270883840.0, 'duckdb_disk_usage{pid="12",type="free"}': 7009648640.0, 'duckdb_disk_usage{pid="12",type="percent"}': 92.1, 'hf_datasets_disk_usage{pid="12",type="total"}': 89297309696.0, 'hf_datasets_disk_usage{pid="12",type="used"}': 82270883840.0, 'hf_datasets_disk_usage{pid="12",type="free"}': 7009648640.0, 'hf_datasets_disk_usage{pid="12",type="percent"}': 92.1, 'parquet_metadata_disk_usage{pid="12",type="total"}': 89297309696.0, 'parquet_metadata_disk_usage{pid="12",type="used"}': 82270883840.0, 
'parquet_metadata_disk_usage{pid="12",type="free"}': 7009648640.0, 'parquet_metadata_disk_usage{pid="12",type="percent"}': 92.1, 'starlette_requests_total{method="GET",path_template="/healthcheck"}': 2.0, 'starlette_requests_total{method="GET",path_template="/metrics"}': 1.0, 'starlette_responses_total{method="GET",path_template="/healthcheck",status_code="200"}': 2.0}
2023-09-28T20:39:19.0739643Z assert False
2023-09-28T20:39:19.0740723Z + where False = has_metric(name='queue_jobs_total', labels={'pid': '[0-9]*', 'queue': 'dataset-config-names', 'status': 'started'}, metric_names={'assets_disk_usage{pid="12",type="free"}', 'assets_disk_usage{pid="12",type="percent"}', 'assets_disk_usage{pid="12",type="total"}', 'assets_disk_usage{pid="12",type="used"}', 'descriptive_statistics_disk_usage{pid="12",type="free"}', 'descriptive_statistics_disk_usage{pid="12",type="percent"}', ...})
2023-09-28T20:39:19.0741032Z FAILED tests/test_52_search.py::test_search_endpoint - RuntimeError: Poll timeout
2023-09-28T20:39:19.0741198Z ================= 20 failed, 27 passed in 12879.59s (3:34:39) ==================
``` | closed | 2023-09-29T15:27:28Z | 2024-02-06T15:42:15Z | 2024-02-06T15:42:14Z | polinaeterna |
1,919,441,163 | CloudFront error when deleting cached assets | When running the following line on a big dataset (`google/xtreme_s`), I got the following error:
https://github.com/huggingface/datasets-server/blob/e7a89be3c6bec9497e00302a10638dc0389f7561/services/admin/src/admin/routes/recreate_dataset.py#L69
```
504 ERROR

The request could not be satisfied.

CloudFront attempted to establish a connection with the origin, but either the attempt failed or the origin closed the connection. We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner.

If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by reviewing the CloudFront documentation.

Generated by cloudfront (CloudFront)
Request ID: rfBdSqx9O8lmWHdey51ghOoeITA8quwIQ30hpCaCM1TUKdwYJkRoPg==
``` | CloudFront error when deleting cached assets: When running the following line on a big dataset (`google/xtreme_s`), I got the following error:
https://github.com/huggingface/datasets-server/blob/e7a89be3c6bec9497e00302a10638dc0389f7561/services/admin/src/admin/routes/recreate_dataset.py#L69
```
504 ERROR

The request could not be satisfied.

CloudFront attempted to establish a connection with the origin, but either the attempt failed or the origin closed the connection. We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner.

If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by reviewing the CloudFront documentation.

Generated by cloudfront (CloudFront)
Request ID: rfBdSqx9O8lmWHdey51ghOoeITA8quwIQ30hpCaCM1TUKdwYJkRoPg==
``` | closed | 2023-09-29T15:05:31Z | 2023-09-29T21:57:54Z | 2023-09-29T16:13:43Z | severo |
1,919,383,070 | Fix zipped audio extension in streaming | The audio file extension was not inferred correctly when we stream the first rows.
Indeed in some cases the audio file path can be
```
zip://audio.wav::https://foo.bar/data.zip
```
fix https://huggingface.co/datasets/ccmusic-database/acapella_eval/discussions/3 | Fix zipped audio extension in streaming: The audio file extension was not inferred correctly when we stream the first rows.
Indeed in some cases the audio file path can be
```
zip://audio.wav::https://foo.bar/data.zip
```
fix https://huggingface.co/datasets/ccmusic-database/acapella_eval/discussions/3 | closed | 2023-09-29T14:33:17Z | 2023-09-29T15:03:42Z | 2023-09-29T15:03:41Z | lhoestq |
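A minimal sketch of the fix, assuming the fsspec chained-URL convention shown above, where the segment before `::` is the file inside the archive (`infer_audio_extension` is a hypothetical helper, not the actual worker code):

```python
import os

def infer_audio_extension(path: str) -> str:
    # For chained URLs such as "zip://audio.wav::https://foo.bar/data.zip",
    # the audio file is the segment before "::"; taking the extension of the
    # full string would wrongly yield ".zip".
    inner = path.split("::", 1)[0]
    return os.path.splitext(inner)[1]

assert infer_audio_extension("zip://audio.wav::https://foo.bar/data.zip") == ".wav"
assert infer_audio_extension("audio.mp3") == ".mp3"
```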
1,919,308,205 | Use swap to avoid OOM? | The pods don't have swap. Is it possible to have swap to avoid OOM, even at the expense of longer processing time in workers? | Use swap to avoid OOM?: The pods don't have swap. Is it possible to have swap to avoid OOM, even at the expense of longer processing time in workers? | closed | 2023-09-29T13:48:54Z | 2024-06-19T14:23:36Z | 2024-06-19T14:23:36Z | severo |
1,919,306,017 | Create "heavy" workers with a lot of RAM | Aside from light and normal workers.
To make the most efficient use of them, maybe rework how we assign the "difficulty" field of the jobs. Currently it only depends on the job type. Maybe we want another field to ask for more RAM, or another way to decide which type of worker should process the job. | Create "heavy" workers with a lot of RAM: Aside from light and normal workers.
To make the most efficient use of them, maybe rework how we assign the "difficulty" field of the jobs. Currently it only depends on the job type. Maybe we want another field to ask for more RAM, or another way to decide which type of worker should process the job. | closed | 2023-09-29T13:47:29Z | 2023-10-03T12:49:22Z | 2023-10-03T12:49:22Z | severo |
1,919,288,651 | ram | ram.
(following https://github.com/huggingface/datasets-server/pull/1883) | ram: ram.
(following https://github.com/huggingface/datasets-server/pull/1883) | closed | 2023-09-29T13:36:44Z | 2023-09-29T13:37:40Z | 2023-09-29T13:37:39Z | lhoestq |
1,919,140,169 | fix: remove file in cached-assets even if not uploaded | Local files remain in local cached-assets when they already exist on the bucket; we should remove them anyway.
I will improve the logic in another PR: create files ONLY when they don't exist on the bucket.
 | fix: remove file in cached-assets even if not uploaded: Local files remain in local cached-assets when they already exist on the bucket; we should remove them anyway.
I will improve the logic in another PR: create files ONLY when they don't exist on the bucket.
| closed | 2023-09-29T11:55:57Z | 2023-09-29T13:46:24Z | 2023-09-29T13:46:24Z | AndreaFrancis |
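A minimal sketch of the improved logic mentioned above, assuming boto3 and hypothetical bucket/key names (this is not the actual service code): upload only when the object is missing on the bucket, and remove the local file in every case:

```python
from pathlib import Path

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def upload_if_missing_then_clean(local_path: Path, bucket: str, key: str) -> None:
    try:
        # cheap existence check: head_object raises a 404 ClientError if missing
        s3.head_object(Bucket=bucket, Key=key)
    except ClientError as err:
        if err.response["Error"]["Code"] == "404":
            s3.upload_file(str(local_path), bucket, key)
        else:
            raise
    finally:
        # the local copy must go away whether or not we uploaded it
        local_path.unlink(missing_ok=True)
```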
1,919,135,486 | Fix lock release when finishing a job | See https://github.com/huggingface/datasets-server/pull/1884#discussion_r1341134145
The owner is now a UUID.
https://github.com/huggingface/dataset-viewer/blob/3ab176bfc38ad533ed08ad8d015eaed0d26202d0/libs/libcommon/src/libcommon/queue/jobs.py#L702 | Fix lock release when finishing a job: See https://github.com/huggingface/datasets-server/pull/1884#discussion_r1341134145
The owner is now a UUID.
https://github.com/huggingface/dataset-viewer/blob/3ab176bfc38ad533ed08ad8d015eaed0d26202d0/libs/libcommon/src/libcommon/queue/jobs.py#L702 | closed | 2023-09-29T11:52:24Z | 2024-08-22T09:48:28Z | 2024-08-22T09:48:28Z | severo |
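A minimal sketch of the owner-as-UUID idea, with hypothetical names (the real implementation is in the `jobs.py` file linked above): each job acquires the lock with a fresh UUID, and the release fails unless the caller presents the same UUID:

```python
import uuid
from typing import Optional

class Lock:
    """Toy in-memory lock keyed by an owner UUID (illustration only)."""

    def __init__(self) -> None:
        self.owner: Optional[str] = None

    def acquire(self) -> str:
        if self.owner is not None:
            raise RuntimeError("already locked")
        self.owner = uuid.uuid4().hex  # a fresh UUID identifies this job
        return self.owner

    def release(self, owner: str) -> None:
        # a stale or wrong owner must not free a lock held by another job
        if self.owner != owner:
            raise RuntimeError("not the lock owner")
        self.owner = None

lock = Lock()
token = lock.acquire()
lock.release(token)
```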
1,919,091,472 | Abstract external storage | Features:
- use S3 in production and a docker volume in tests: https://github.com/huggingface/datasets-server/pull/1882#issuecomment-1740688443
- delete all the assets or temporary files associated with a cache response (https://github.com/huggingface/datasets-server/pull/1884#discussion_r1340643599)
| Abstract external storage: Features:
- use S3 in production and a docker volume in tests: https://github.com/huggingface/datasets-server/pull/1882#issuecomment-1740688443
- delete all the assets or temporary files associated with a cache response (https://github.com/huggingface/datasets-server/pull/1884#discussion_r1340643599)
| closed | 2023-09-29T11:21:33Z | 2023-10-31T14:15:44Z | 2023-10-31T14:15:44Z | severo |
1,918,857,648 | Audio is slow on /rows and /search and makes the dataset viewer timeout | See https://datasets-server.huggingface.co/rows?dataset=openslr/librispeech_asr&config=clean&split=train.100 for example
https://github.com/huggingface/moon-landing/pull/7579 (internal) will increase the timeout for requests from the Hub, but we should find a way to increase the speed anyway. | Audio is slow on /rows and /search and makes the dataset viewer timeout: See https://datasets-server.huggingface.co/rows?dataset=openslr/librispeech_asr&config=clean&split=train.100 for example
https://github.com/huggingface/moon-landing/pull/7579 (internal) will increase the timeout for requests from the Hub, but we should find a way to increase the speed anyway. | closed | 2023-09-29T08:44:57Z | 2024-07-30T16:06:37Z | 2024-07-30T16:06:37Z | severo |
1,918,214,415 | Remove obsolete code once cached-assets is fully migrated to S3 | Once we validate that cached-assets is working as expected on S3 instead of EFS, we can remove obsolete code.
For example, the clean_cached_assets calls that run from time to time in /rows and /search.
| Remove obsolete code once cached-assets is fully migrated to S3: Once we validate that cached-assets is working as expected on S3 instead of EFS, we can remove obsolete code.
For example, the clean_cached_assets calls that run from time to time in /rows and /search.
| closed | 2023-09-28T20:19:48Z | 2023-11-07T12:25:31Z | 2023-11-07T12:25:31Z | AndreaFrancis |
1,918,202,967 | feat: 🎸 add POST /admin/recreate-dataset?dataset=.&priority= | see
https://github.com/huggingface/datasets-server/issues/1823#issuecomment-1739117350
Missing:
- tests (it's destructive, so it's better to ensure we don't affect other datasets, for example)
- add a tab in the admin UI
| feat: 🎸 add POST /admin/recreate-dataset?dataset=.&priority=: see
https://github.com/huggingface/datasets-server/issues/1823#issuecomment-1739117350
Missing:
- tests (it's destructive, so it's better to ensure we don't affect other datasets, for example)
- add a tab in the admin UI
| closed | 2023-09-28T20:12:56Z | 2023-09-29T13:42:25Z | 2023-09-29T13:42:16Z | severo |
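A minimal sketch of what such an endpoint could look like in Starlette (the framework used by the services); the handler body is hypothetical, with the destructive steps left as comments:

```python
from starlette.applications import Starlette
from starlette.requests import Request
from starlette.responses import JSONResponse
from starlette.routing import Route

async def recreate_dataset(request: Request) -> JSONResponse:
    dataset = request.query_params.get("dataset")
    priority = request.query_params.get("priority", "low")
    if not dataset:
        return JSONResponse({"error": "missing 'dataset' parameter"}, status_code=422)
    # hypothetical steps: wipe the cache entries and assets for this dataset,
    # then enqueue the first step of the DAG again with the requested priority
    # delete_dataset_responses(dataset=dataset)
    # update_dataset(dataset=dataset, priority=priority)
    return JSONResponse({"status": "ok", "dataset": dataset, "priority": priority})

app = Starlette(routes=[Route("/admin/recreate-dataset", recreate_dataset, methods=["POST"])])
```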
1,918,171,405 | Even more ram for worker | C4 indexing OOMed with 20GB, trying with 24GB
following https://github.com/huggingface/datasets-server/pull/1876 | Even more ram for worker: C4 indexing OOMed with 20GB, trying with 24GB
following https://github.com/huggingface/datasets-server/pull/1876 | closed | 2023-09-28T19:52:03Z | 2023-09-29T13:37:29Z | 2023-09-28T22:41:55Z | lhoestq |
1,917,894,019 | cached assets on s3 for all datasets | null | cached assets on s3 for all datasets: | closed | 2023-09-28T16:33:58Z | 2023-09-29T11:22:02Z | 2023-09-28T16:56:44Z | AndreaFrancis |
1,917,856,582 | Clean HF datasets cache | Every 3h for files older than 3h
Also, contrary to the delete-duckdb-index job, I simply scan the mtime of the directories instead of the mtime of every single file.
To be merged after I'm done removing the current 60TB of cache (done!) | Clean HF datasets cache: Every 3h for files older than 3h
Also, contrary to the delete-duckdb-index job, I simply scan the mtime of the directories instead of the mtime of every single file.
To be merged after I'm done removing the current 60TB of cache (done!) | closed | 2023-09-28T16:11:17Z | 2023-10-02T14:14:28Z | 2023-10-02T14:14:27Z | lhoestq |
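A minimal sketch of the directory-level scan described above (the root path and the schedule wiring are assumptions): only the mtime of each top-level cache directory is checked, not the mtime of every file inside:

```python
import shutil
import time
from pathlib import Path

EXPIRED_AFTER_SECONDS = 3 * 60 * 60  # directories older than 3h, per the job above

def clean_hf_datasets_cache(root: Path) -> None:
    cutoff = time.time() - EXPIRED_AFTER_SECONDS
    for subdir in root.iterdir():
        # one stat per directory instead of walking every single file
        if subdir.is_dir() and subdir.stat().st_mtime < cutoff:
            shutil.rmtree(subdir, ignore_errors=True)
```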
1,917,508,605 | feat(chart): remove external dns for staging env | We are now using Cloudfront to serve it, and expose assets from S3 | feat(chart): remove external dns for staging env: We are now using Cloudfront to serve it, and expose assets from S3 | closed | 2023-09-28T13:02:45Z | 2023-09-28T13:24:26Z | 2023-09-28T13:24:25Z | rtrompier |
1,917,488,293 | configure staging s3 bucket | null | configure staging s3 bucket: | closed | 2023-09-28T12:53:52Z | 2023-09-28T12:55:26Z | 2023-09-28T12:55:25Z | AndreaFrancis |
1,916,362,740 | change bucket validation in s3 client | null | change bucket validation in s3 client: | closed | 2023-09-27T21:32:16Z | 2023-09-28T02:08:26Z | 2023-09-28T02:08:25Z | AndreaFrancis |
1,916,360,022 | [docs] Reorder query section | As pointed out [here](https://huggingface.slack.com/archives/C05NVC4K0Q4/p1694911765308159?thread_ts=1693329498.070839&cid=C05NVC4K0Q4), this PR alphabetizes the section to be a little tidier :) | [docs] Reorder query section: As pointed out [here](https://huggingface.slack.com/archives/C05NVC4K0Q4/p1694911765308159?thread_ts=1693329498.070839&cid=C05NVC4K0Q4), this PR alphabetizes the section to be a little tidier :) | closed | 2023-09-27T21:30:02Z | 2023-09-28T15:46:20Z | 2023-09-28T07:41:44Z | stevhliu |
1,915,907,681 | Moar ram for c4 indexing | I'm going to try with requests=limits first to make sure it's not because of overcommitting
First I'll try with 20, then 24 if it doesn't work | Moar ram for c4 indexing: I'm going to try with requests=limits first to make sure it's not because of overcommitting
First I'll try with 20, then 24 if it doesn't work | closed | 2023-09-27T16:09:20Z | 2023-09-27T16:11:53Z | 2023-09-27T16:11:51Z | lhoestq |
1,915,906,874 | adding s3 logs in client | null | adding s3 logs in client: | closed | 2023-09-27T16:08:46Z | 2023-09-27T16:21:26Z | 2023-09-27T16:21:25Z | AndreaFrancis |
1,915,833,545 | Add Hub dashboard in admin app | A simple dashboard in gradio to show the number of datasets per type (parquet, json, dataset script, etc.) and the viewer coverage of trending datasets (can be extended later if needed).
The idea is to track whether users are still using dataset scripts often, or if usage is moving toward our goal: fewer and fewer datasets defined with code.
The second idea is to see if the trending datasets are well supported by the datasets-server, which is one of our main priorities.
I used two endpoints for this:
- `/is-valid`
- `/admin/num-dataset-infos-by-builder-name` that I created
Fix https://github.com/huggingface/datasets-server/issues/1857 | Add Hub dashboard in admin app: A simple dashboard in gradio to show the number of datasets per type (parquet, json, dataset script, etc.) and the viewer coverage of trending datasets (can be extended later if needed).
The idea is to track whether users are still using dataset scripts often, or if usage is moving toward our goal: fewer and fewer datasets defined with code.
The second idea is to see if the trending datasets are well supported by the datasets-server, which is one of our main priorities.
I used two endpoints for this:
- `/is-valid`
- `/admin/num-dataset-infos-by-builder-name` that I created
Fix https://github.com/huggingface/datasets-server/issues/1857 | closed | 2023-09-27T15:26:27Z | 2023-09-28T11:46:05Z | 2023-09-28T11:46:04Z | lhoestq |
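A minimal sketch of the coverage tab, assuming gradio and a hand-picked list of dataset names (the real dashboard reads the trending list from the Hub and also calls the admin endpoint mentioned above):

```python
import gradio as gr
import requests

DATASETS = ["glue", "mozilla-foundation/common_voice_11_0"]  # placeholder list

def coverage() -> list[list[str]]:
    rows = []
    for dataset in DATASETS:
        # /is-valid returns 200 when the dataset viewer works for the dataset
        response = requests.get(
            "https://datasets-server.huggingface.co/is-valid",
            params={"dataset": dataset},
            timeout=10,
        )
        rows.append([dataset, "✅" if response.status_code == 200 else "❌"])
    return rows

with gr.Blocks() as demo:
    gr.Markdown("## Viewer coverage of trending datasets")
    table = gr.Dataframe(headers=["dataset", "valid"])
    gr.Button("Refresh").click(coverage, outputs=table)

demo.launch()
```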
1,915,795,860 | adding debug logs for rows and search - s3 | null | adding debug logs for rows and search - s3: | closed | 2023-09-27T15:06:01Z | 2023-09-27T15:28:27Z | 2023-09-27T15:28:25Z | AndreaFrancis |
1,915,719,005 | fix(s3): use the correct secrets name | null | fix(s3): use the correct secrets name: | closed | 2023-09-27T14:28:52Z | 2023-09-27T14:39:19Z | 2023-09-27T14:39:18Z | rtrompier |
1,915,637,529 | fix(s3): use the correct secrets | null | fix(s3): use the correct secrets: | closed | 2023-09-27T13:49:50Z | 2023-09-27T13:56:02Z | 2023-09-27T13:56:01Z | rtrompier |
1,915,569,805 | fix: account for possible -1 value in ClassLabel feature in statistics computation | will fix https://github.com/huggingface/datasets-server/issues/1833
It appeared that the `ClassLabel` feature might have a `-1` (int) value, which means "no label"; I didn't know that, so I ended up getting the last value of `ClassLabel.names` instead of it lol.
What would be the best way to represent it? It's not a null value. Now I convert it to string form (like `"-1"`). I thought of converting it to something like `"no_label"` or `"(no label)"` (as the viewer currently displays them in rows), but in theory that might match an existing label specified explicitly in `ClassLabel.names`. Here is an example of a response:
```python
{
"nan_count": 0,
"nan_proportion": 0.0,
"n_unique": 2, # num unique values stays the same as when there is no `no label` values
"frequencies": {
"cat": 10
"dog": 5,
"-1": 10,
}
}
```
Alternatively, I can add a new field "num_no_labels" to store this number instead of having it as a key in "frequencies".
wdyt? | fix: account for possible -1 value in ClassLabel feature in statistics computation: will fix https://github.com/huggingface/datasets-server/issues/1833
It appeared that the `ClassLabel` feature might have a `-1` (int) value, which means "no label"; I didn't know that, so I ended up getting the last value of `ClassLabel.names` instead of it lol.
What would be the best way to represent it? It's not a null value. Now I convert it to string form (like `"-1"`). I thought of converting it to something like `"no_label"` or `"(no label)"` (as the viewer currently displays them in rows), but in theory that might match an existing label specified explicitly in `ClassLabel.names`. Here is an example of a response:
```python
{
"nan_count": 0,
"nan_proportion": 0.0,
"n_unique": 2, # num unique values stays the same as when there is no `no label` values
"frequencies": {
"cat": 10
"dog": 5,
"-1": 10,
}
}
```
Alternatively, I can add a new field "num_no_labels" to store this number instead of having it as a key in "frequencies".
wdyt? | closed | 2023-09-27T13:16:55Z | 2023-09-29T15:55:32Z | 2023-09-29T15:55:30Z | polinaeterna |
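A minimal sketch of the conversion discussed above (hypothetical helper, not the actual statistics worker): label ids are mapped through `ClassLabel.names`, while `-1` is kept as the string `"-1"`:

```python
from collections import Counter

def label_frequencies(values: list[int], names: list[str]) -> dict[str, int]:
    counts = Counter(values)
    return {
        # -1 means "no label" and has no entry in names, so keep it as "-1"
        (names[value] if value >= 0 else str(value)): count
        for value, count in counts.items()
    }

freqs = label_frequencies([0, 0, 1, -1, -1], names=["cat", "dog"])
assert freqs == {"cat": 2, "dog": 1, "-1": 2}
```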
1,915,474,633 | docs: ✏️ add two new column types + examples for /statistics | null | docs: ✏️ add two new column types + examples for /statistics: | closed | 2023-09-27T12:31:40Z | 2023-09-27T12:36:03Z | 2023-09-27T12:34:28Z | severo |