id | title | body | description | state | created_at | updated_at | closed_at | user
---|---|---|---|---|---|---|---|---|
1,705,370,031 | feat: 🎸 upgrade tensorflow | fix security vulnerability | feat: 🎸 upgrade tensorflow: fix security vulnerability | closed | 2023-05-11T08:53:05Z | 2023-05-12T15:02:49Z | 2023-05-12T14:59:22Z | severo |
1,705,358,652 | Remove "force" field from the queue Jobs | Now that all the jobs are created by the DatasetState.backfill() method, we don't want to skip jobs.
Another PR will remove the "skip job" mechanism | Remove "force" field from the queue Jobs: Now that all the jobs are created by the DatasetState.backfill() method, we don't want to skip jobs.
Another PR will remove the "skip job" mechanism | closed | 2023-05-11T08:46:06Z | 2023-05-12T15:17:06Z | 2023-05-12T15:14:20Z | severo |
1,705,352,346 | Remove force field in queue Jobs | null | Remove force field in queue Jobs: | closed | 2023-05-11T08:42:35Z | 2023-05-11T08:48:48Z | 2023-05-11T08:45:47Z | severo |
1,705,210,602 | Catch commit conflict in parquet branch | See https://huggingface.co/datasets/cais/mmlu/discussions/9.
We have to catch the exception (412) here: https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/config/parquet_and_info.py#L904-L911
and raise a retriable exception so that the error is only temporary. | Catch commit conflict in parquet branch: See https://huggingface.co/datasets/cais/mmlu/discussions/9.
We have to catch the exception (412) here: https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/config/parquet_and_info.py#L904-L911
and raise a retriable exception so that the error is only temporary. | closed | 2023-05-11T07:11:37Z | 2023-06-20T12:37:09Z | 2023-06-20T12:37:09Z | severo |
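The issue above describes catching an HTTP 412 (commit conflict) and re-raising it as a retriable error. Below is a minimal, hypothetical sketch of that pattern; the class and function names are illustrative, not the repository's actual code:

```python
# Hypothetical sketch: turn an HTTP 412 ("Precondition Failed", i.e. a concurrent
# commit on the parquet branch) into a retriable error instead of a permanent one.
from typing import Callable

from huggingface_hub.utils import HfHubHTTPError


class RetriableCommitConflictError(Exception):
    """Hypothetical error: the commit conflicted; the job can be retried later."""


def commit_with_conflict_handling(do_commit: Callable[[], None]) -> None:
    try:
        do_commit()  # e.g. a wrapped HfApi.create_commit(...) call
    except HfHubHTTPError as err:
        if err.response is not None and err.response.status_code == 412:
            raise RetriableCommitConflictError(
                "Concurrent commit on the parquet branch; retry later"
            ) from err
        raise
```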
1,704,324,689 | feat: 🎸 upgrade libcommon in all the code | following #1158 | feat: 🎸 upgrade libcommon in all the code: following #1158 | closed | 2023-05-10T17:05:23Z | 2023-05-11T07:03:17Z | 2023-05-11T07:00:13Z | severo |
1,703,962,956 | Overcommit a bit less | Reduce the overcommitment to avoid having too many nodes killed by AWS. I also reduced the number of workers a lot, because we currently don't need as many. | Overcommit a bit less: Reduce the overcommitment to avoid having too many nodes killed by AWS. I also reduced the number of workers a lot, because we currently don't need as many. | closed | 2023-05-10T13:40:11Z | 2023-05-10T14:04:49Z | 2023-05-10T14:04:48Z | severo |
1,703,861,975 | feat: 🎸 use the cached response dataset-is-valid | instead of computing it every time | feat: 🎸 use the cached response dataset-is-valid: instead of computing it every time | closed | 2023-05-10T12:52:04Z | 2023-05-10T16:05:30Z | 2023-05-10T16:05:29Z | severo |
1,703,778,333 | Dataset Viewer issue for code_search_net | ### Link
https://huggingface.co/datasets/code_search_net
### Description
The dataset viewer is not working for dataset code_search_net.
Error details:
```
Error code: UnexpectedError
```
| Dataset Viewer issue for code_search_net: ### Link
https://huggingface.co/datasets/code_search_net
### Description
The dataset viewer is not working for dataset code_search_net.
Error details:
```
Error code: UnexpectedError
```
| closed | 2023-05-10T12:04:45Z | 2023-05-10T15:30:15Z | 2023-05-10T15:30:15Z | lenglengcsy |
1,703,454,339 | Replace legacy hffs dependency with huggingface-hub | This PR removes legacy `hffs` dependency and uses `huggingface-hub` instead.
Fix #1150. | Replace legacy hffs dependency with huggingface-hub: This PR removes legacy `hffs` dependency and uses `huggingface-hub` instead.
Fix #1150. | closed | 2023-05-10T09:07:47Z | 2023-05-10T15:23:22Z | 2023-05-10T15:23:21Z | albertvillanova |
1,703,393,628 | feat: 🎸 do full backfill instead of creating jobs for children | it will reduce the inconsistencies in the cache. In particular, parallel steps lead to deleting valid cache entries. | feat: 🎸 do full backfill instead of creating jobs for children: it will reduce the inconsistencies in the cache. In particular, parallel steps lead to deleting valid cache entries. | closed | 2023-05-10T08:36:52Z | 2023-05-10T16:04:59Z | 2023-05-10T16:04:57Z | severo |
1,703,286,401 | Fix flaky executor test in services/worker when the job takes too much time | See https://github.com/huggingface/datasets-server/pull/1147#issuecomment-1541483075
| Fix flaky executor test in services/worker when the job takes too much time: See https://github.com/huggingface/datasets-server/pull/1147#issuecomment-1541483075
| closed | 2023-05-10T07:22:53Z | 2023-07-26T15:28:17Z | 2023-07-26T15:28:17Z | severo |
1,703,259,301 | Dataset Viewer issue for openclimatefix/goes-mrms | ### Link
https://huggingface.co/datasets/openclimatefix/goes-mrms
### Description
The dataset viewer is not working for dataset openclimatefix/goes-mrms.
Error details:
```
Error code: ConfigNamesError
Exception: FileNotFoundError
Message: Couldn't find a dataset script at /src/workers/datasets_based/openclimatefix/goes-mrms/goes-mrms.py or any data file in the same directory. Couldn't find 'openclimatefix/goes-mrms' on the Hugging Face Hub either: FileNotFoundError: Unable to resolve any data file that matches ['**'] in dataset repository openclimatefix/goes-mrms with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'BLP', 'BMP', 'DIB', 'BUFR', 'CUR', 'PCX', 'DCX', 'DDS', 'PS', 'EPS', 'FIT', 'FITS', 'FLI', 'FLC', 'FTC', 'FTU', 'GBR', 'GIF', 'GRIB', 'H5', 'HDF', 'PNG', 'APNG', 'JP2', 'J2K', 'JPC', 'JPF', 'JPX', 'J2C', 'ICNS', 'ICO', 'IM', 'IIM', 'TIF', 'TIFF', 'JFIF', 'JPE', 'JPG', 'JPEG', 'MPG', 'MPEG', 'MSP', 'PCD', 'PXR', 'PBM', 'PGM', 'PPM', 'PNM', 'PSD', 'BW', 'RGB', 'RGBA', 'SGI', 'RAS', 'TGA', 'ICB', 'VDA', 'VST', 'WEBP', 'WMF', 'EMF', 'XBM', 'XPM', 'aiff', 'au', 'avr', 'caf', 'flac', 'htk', 'svx', 'mat4', 'mat5', 'mpc2k', 'ogg', 'paf', 'pvf', 'raw', 'rf64', 'sd2', 'sds', 'ircam', 'voc', 'w64', 'wav', 'nist', 'wavex', 'wve', 'xi', 'mp3', 'opus', 'AIFF', 'AU', 'AVR', 'CAF', 'FLAC', 'HTK', 'SVX', 'MAT4', 'MAT5', 'MPC2K', 'OGG', 'PAF', 'PVF', 'RAW', 'RF64', 'SD2', 'SDS', 'IRCAM', 'VOC', 'W64', 'WAV', 'NIST', 'WAVEX', 'WVE', 'XI', 'MP3', 'OPUS', 'zip']
Traceback: The previous step failed, the error is copied to this step: kind='/config-names' dataset='openclimatefix/goes-mrms' config=None split=None---Traceback (most recent call last):
File "/src/workers/datasets_based/src/datasets_based/workers/config_names.py", line 89, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 323, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 1213, in dataset_module_factory
raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at /src/workers/datasets_based/openclimatefix/goes-mrms/goes-mrms.py or any data file in the same directory. Couldn't find 'openclimatefix/goes-mrms' on the Hugging Face Hub either: FileNotFoundError: Unable to resolve any data file that matches ['**'] in dataset repository openclimatefix/goes-mrms with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'BLP', 'BMP', 'DIB', 'BUFR', 'CUR', 'PCX', 'DCX', 'DDS', 'PS', 'EPS', 'FIT', 'FITS', 'FLI', 'FLC', 'FTC', 'FTU', 'GBR', 'GIF', 'GRIB', 'H5', 'HDF', 'PNG', 'APNG', 'JP2', 'J2K', 'JPC', 'JPF', 'JPX', 'J2C', 'ICNS', 'ICO', 'IM', 'IIM', 'TIF', 'TIFF', 'JFIF', 'JPE', 'JPG', 'JPEG', 'MPG', 'MPEG', 'MSP', 'PCD', 'PXR', 'PBM', 'PGM', 'PPM', 'PNM', 'PSD', 'BW', 'RGB', 'RGBA', 'SGI', 'RAS', 'TGA', 'ICB', 'VDA', 'VST', 'WEBP', 'WMF', 'EMF', 'XBM', 'XPM', 'aiff', 'au', 'avr', 'caf', 'flac', 'htk', 'svx', 'mat4', 'mat5', 'mpc2k', 'ogg', 'paf', 'pvf', 'raw', 'rf64', 'sd2', 'sds', 'ircam', 'voc', 'w64', 'wav', 'nist', 'wavex', 'wve', 'xi', 'mp3', 'opus', 'AIFF', 'AU', 'AVR', 'CAF', 'FLAC', 'HTK', 'SVX', 'MAT4', 'MAT5', 'MPC2K', 'OGG', 'PAF', 'PVF', 'RAW', 'RF64', 'SD2', 'SDS', 'IRCAM', 'VOC', 'W64', 'WAV', 'NIST', 'WAVEX', 'WVE', 'XI', 'MP3', 'OPUS', 'zip']
```
| Dataset Viewer issue for openclimatefix/goes-mrms: ### Link
https://huggingface.co/datasets/openclimatefix/goes-mrms
### Description
The dataset viewer is not working for dataset openclimatefix/goes-mrms.
Error details:
```
Error code: ConfigNamesError
Exception: FileNotFoundError
Message: Couldn't find a dataset script at /src/workers/datasets_based/openclimatefix/goes-mrms/goes-mrms.py or any data file in the same directory. Couldn't find 'openclimatefix/goes-mrms' on the Hugging Face Hub either: FileNotFoundError: Unable to resolve any data file that matches ['**'] in dataset repository openclimatefix/goes-mrms with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'BLP', 'BMP', 'DIB', 'BUFR', 'CUR', 'PCX', 'DCX', 'DDS', 'PS', 'EPS', 'FIT', 'FITS', 'FLI', 'FLC', 'FTC', 'FTU', 'GBR', 'GIF', 'GRIB', 'H5', 'HDF', 'PNG', 'APNG', 'JP2', 'J2K', 'JPC', 'JPF', 'JPX', 'J2C', 'ICNS', 'ICO', 'IM', 'IIM', 'TIF', 'TIFF', 'JFIF', 'JPE', 'JPG', 'JPEG', 'MPG', 'MPEG', 'MSP', 'PCD', 'PXR', 'PBM', 'PGM', 'PPM', 'PNM', 'PSD', 'BW', 'RGB', 'RGBA', 'SGI', 'RAS', 'TGA', 'ICB', 'VDA', 'VST', 'WEBP', 'WMF', 'EMF', 'XBM', 'XPM', 'aiff', 'au', 'avr', 'caf', 'flac', 'htk', 'svx', 'mat4', 'mat5', 'mpc2k', 'ogg', 'paf', 'pvf', 'raw', 'rf64', 'sd2', 'sds', 'ircam', 'voc', 'w64', 'wav', 'nist', 'wavex', 'wve', 'xi', 'mp3', 'opus', 'AIFF', 'AU', 'AVR', 'CAF', 'FLAC', 'HTK', 'SVX', 'MAT4', 'MAT5', 'MPC2K', 'OGG', 'PAF', 'PVF', 'RAW', 'RF64', 'SD2', 'SDS', 'IRCAM', 'VOC', 'W64', 'WAV', 'NIST', 'WAVEX', 'WVE', 'XI', 'MP3', 'OPUS', 'zip']
Traceback: The previous step failed, the error is copied to this step: kind='/config-names' dataset='openclimatefix/goes-mrms' config=None split=None---Traceback (most recent call last):
File "/src/workers/datasets_based/src/datasets_based/workers/config_names.py", line 89, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 323, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/workers/datasets_based/.venv/lib/python3.9/site-packages/datasets/load.py", line 1213, in dataset_module_factory
raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at /src/workers/datasets_based/openclimatefix/goes-mrms/goes-mrms.py or any data file in the same directory. Couldn't find 'openclimatefix/goes-mrms' on the Hugging Face Hub either: FileNotFoundError: Unable to resolve any data file that matches ['**'] in dataset repository openclimatefix/goes-mrms with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'BLP', 'BMP', 'DIB', 'BUFR', 'CUR', 'PCX', 'DCX', 'DDS', 'PS', 'EPS', 'FIT', 'FITS', 'FLI', 'FLC', 'FTC', 'FTU', 'GBR', 'GIF', 'GRIB', 'H5', 'HDF', 'PNG', 'APNG', 'JP2', 'J2K', 'JPC', 'JPF', 'JPX', 'J2C', 'ICNS', 'ICO', 'IM', 'IIM', 'TIF', 'TIFF', 'JFIF', 'JPE', 'JPG', 'JPEG', 'MPG', 'MPEG', 'MSP', 'PCD', 'PXR', 'PBM', 'PGM', 'PPM', 'PNM', 'PSD', 'BW', 'RGB', 'RGBA', 'SGI', 'RAS', 'TGA', 'ICB', 'VDA', 'VST', 'WEBP', 'WMF', 'EMF', 'XBM', 'XPM', 'aiff', 'au', 'avr', 'caf', 'flac', 'htk', 'svx', 'mat4', 'mat5', 'mpc2k', 'ogg', 'paf', 'pvf', 'raw', 'rf64', 'sd2', 'sds', 'ircam', 'voc', 'w64', 'wav', 'nist', 'wavex', 'wve', 'xi', 'mp3', 'opus', 'AIFF', 'AU', 'AVR', 'CAF', 'FLAC', 'HTK', 'SVX', 'MAT4', 'MAT5', 'MPC2K', 'OGG', 'PAF', 'PVF', 'RAW', 'RF64', 'SD2', 'SDS', 'IRCAM', 'VOC', 'W64', 'WAV', 'NIST', 'WAVEX', 'WVE', 'XI', 'MP3', 'OPUS', 'zip']
```
| closed | 2023-05-10T07:03:28Z | 2023-05-12T08:48:31Z | 2023-05-12T08:48:31Z | liusir1632 |
1,702,835,504 | Dataset Viewer issue for eli5 | ### Link
https://huggingface.co/datasets/eli5
### Description
The dataset viewer is not working for dataset eli5.
Error details:
```
Error code: UnexpectedError
```
| Dataset Viewer issue for eli5: ### Link
https://huggingface.co/datasets/eli5
### Description
The dataset viewer is not working for dataset eli5.
Error details:
```
Error code: UnexpectedError
```
| closed | 2023-05-09T22:27:46Z | 2023-05-10T05:32:24Z | 2023-05-10T05:32:23Z | surya-narayanan |
1,702,687,600 | Dataset Viewer issue for GEM/wiki_lingua | ### Link
https://huggingface.co/datasets/GEM/wiki_lingua
### Description
The dataset viewer is not working for dataset GEM/wiki_lingua.
Error details:
```
Error code: JobRunnerCrashedError
Traceback: The previous step failed, the error is copied to this step: kind='/config-names' dataset='GEM/wiki_lingua' config=None split=None---
```
| Dataset Viewer issue for GEM/wiki_lingua: ### Link
https://huggingface.co/datasets/GEM/wiki_lingua
### Description
The dataset viewer is not working for dataset GEM/wiki_lingua.
Error details:
```
Error code: JobRunnerCrashedError
Traceback: The previous step failed, the error is copied to this step: kind='/config-names' dataset='GEM/wiki_lingua' config=None split=None---
```
| closed | 2023-05-09T20:08:13Z | 2023-05-10T07:09:32Z | 2023-05-10T07:09:31Z | surya-narayanan |
1,702,676,365 | Dataset Viewer issue for hendrycks_test | ### Link
https://huggingface.co/datasets/hendrycks_test
### Description
The dataset viewer is not working for dataset hendrycks_test.
Error details:
```
Error code: ExternalFilesSizeRequestConnectionError
Exception: ConnectionError
Message: HTTPSConnectionPool(host='people.eecs.berkeley.edu', port=443): Max retries exceeded with url: /~hendrycks/data.tar (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fdb011d5cd0>: Failed to establish a new connection: [Errno 111] Connection refused'))
Traceback: The previous step failed, the error is copied to this step: kind='config-parquet' dataset='hendrycks_test' config='philosophy' split=None---The previous step failed, the error is copied to this step: kind='config-parquet-and-info' dataset='hendrycks_test' config='philosophy' split=None---Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/util/connection.py", line 95, in create_connection
raise err
File "/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connection.py", line 363, in connect
self.sock = conn = self._new_conn()
File "/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connection.py", line 186, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7fdb011d5cd0>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/requests/adapters.py", line 489, in send
resp = conn.urlopen(
File "/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connectionpool.py", line 787, in urlopen
retries = retries.increment(
File "/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='people.eecs.berkeley.edu', port=443): Max retries exceeded with url: /~hendrycks/data.tar (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fdb011d5cd0>: Failed to establish a new connection: [Errno 111] Connection refused'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 624, in raise_if_too_big_from_external_data_files
for i, size in enumerate(pool.imap_unordered(get_size, ext_data_files)):
File "/usr/local/lib/python3.9/multiprocessing/pool.py", line 870, in next
raise value
File "/usr/local/lib/python3.9/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 519, in _request_size
response = http_head(url, headers=headers, max_retries=3)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 413, in http_head
response = _request_with_retry(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 324, in _request_with_retry
raise err
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 320, in _request_with_retry
response = requests.request(method=method.upper(), url=url, timeout=timeout, **params)
File "/src/services/worker/.venv/lib/python3.9/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/requests/adapters.py", line 565, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='people.eecs.berkeley.edu', port=443): Max retries exceeded with url: /~hendrycks/data.tar (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fdb011d5cd0>: Failed to establish a new connection: [Errno 111] Connection refused'))
```
| Dataset Viewer issue for hendrycks_test: ### Link
https://huggingface.co/datasets/hendrycks_test
### Description
The dataset viewer is not working for dataset hendrycks_test.
Error details:
```
Error code: ExternalFilesSizeRequestConnectionError
Exception: ConnectionError
Message: HTTPSConnectionPool(host='people.eecs.berkeley.edu', port=443): Max retries exceeded with url: /~hendrycks/data.tar (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fdb011d5cd0>: Failed to establish a new connection: [Errno 111] Connection refused'))
Traceback: The previous step failed, the error is copied to this step: kind='config-parquet' dataset='hendrycks_test' config='philosophy' split=None---The previous step failed, the error is copied to this step: kind='config-parquet-and-info' dataset='hendrycks_test' config='philosophy' split=None---Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/util/connection.py", line 95, in create_connection
raise err
File "/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connection.py", line 363, in connect
self.sock = conn = self._new_conn()
File "/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connection.py", line 186, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7fdb011d5cd0>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/requests/adapters.py", line 489, in send
resp = conn.urlopen(
File "/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connectionpool.py", line 787, in urlopen
retries = retries.increment(
File "/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='people.eecs.berkeley.edu', port=443): Max retries exceeded with url: /~hendrycks/data.tar (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fdb011d5cd0>: Failed to establish a new connection: [Errno 111] Connection refused'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 624, in raise_if_too_big_from_external_data_files
for i, size in enumerate(pool.imap_unordered(get_size, ext_data_files)):
File "/usr/local/lib/python3.9/multiprocessing/pool.py", line 870, in next
raise value
File "/usr/local/lib/python3.9/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 519, in _request_size
response = http_head(url, headers=headers, max_retries=3)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 413, in http_head
response = _request_with_retry(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 324, in _request_with_retry
raise err
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 320, in _request_with_retry
response = requests.request(method=method.upper(), url=url, timeout=timeout, **params)
File "/src/services/worker/.venv/lib/python3.9/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/requests/adapters.py", line 565, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='people.eecs.berkeley.edu', port=443): Max retries exceeded with url: /~hendrycks/data.tar (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fdb011d5cd0>: Failed to establish a new connection: [Errno 111] Connection refused'))
```
| closed | 2023-05-09T19:59:33Z | 2023-05-10T07:31:50Z | 2023-05-10T07:31:50Z | surya-narayanan |
1,702,665,928 | Dataset Viewer issue for reddit | ### Link
https://huggingface.co/datasets/reddit
### Description
The dataset viewer is not working for dataset reddit.
Error details:
```
Error code: DatasetTooBigFromDatasetsError
Traceback: The previous step failed, the error is copied to this step: kind='config-parquet' dataset='reddit' config='default' split=None---The previous step failed, the error is copied to this step: kind='config-parquet-and-info' dataset='reddit' config='default' split=None---
```
| Dataset Viewer issue for reddit: ### Link
https://huggingface.co/datasets/reddit
### Description
The dataset viewer is not working for dataset reddit.
Error details:
```
Error code: DatasetTooBigFromDatasetsError
Traceback: The previous step failed, the error is copied to this step: kind='config-parquet' dataset='reddit' config='default' split=None---The previous step failed, the error is copied to this step: kind='config-parquet-and-info' dataset='reddit' config='default' split=None---
```
| closed | 2023-05-09T19:51:12Z | 2023-05-17T07:42:43Z | 2023-05-17T07:42:43Z | surya-narayanan |
1,702,161,945 | Use of hffs dependency is legacy | The `hffs` dependency is legacy and its repo has been archived: https://github.com/huggingface/hffs
We should use `huggingface-hub` >= 0.14 instead. | Use of hffs dependency is legacy: The `hffs` dependency is legacy and its repo has been archived: https://github.com/huggingface/hffs
We should use `huggingface-hub` >= 0.14 instead. | closed | 2023-05-09T14:29:19Z | 2023-05-10T15:23:23Z | 2023-05-10T15:23:23Z | albertvillanova |
1,702,012,942 | Use state for job creation | We centralize the creation of queue jobs using DatasetState.backfill(), for:
-> webhook
-> endpoint in services/api (when the response is not found, we analyze if the cache should have existed, or is in progress, and respond adequately + launch the job if needed)
Note that jobs are also created by the job runners when they finish. We will change that in another PR (https://github.com/huggingface/datasets-server/pull/1157). | Use state for job creation: We centralize the creation of queue jobs using DatasetState.backfill(), for:
-> webhook
-> endpoint in services/api (when the response is not found, we analyze if the cache should have existed, or is in progress, and respond adequately + launch the job if needed)
Note that jobs are also created by the job runners when they finish. We will change that in another PR (https://github.com/huggingface/datasets-server/pull/1157). | closed | 2023-05-09T13:13:29Z | 2023-05-10T13:33:17Z | 2023-05-10T13:33:16Z | severo |
1,701,809,421 | Dataset Viewer issue for renumics/dcase23-task2-enriched | ### Link
https://huggingface.co/datasets/renumics/dcase23-task2-enriched
### Description
The dataset viewer is not working for dataset renumics/dcase23-task2-enriched.
Error details:
```
Error code: DatasetInfoHubRequestError
```
| Dataset Viewer issue for renumics/dcase23-task2-enriched: ### Link
https://huggingface.co/datasets/renumics/dcase23-task2-enriched
### Description
The dataset viewer is not working for dataset renumics/dcase23-task2-enriched.
Error details:
```
Error code: DatasetInfoHubRequestError
```
| closed | 2023-05-09T10:28:11Z | 2023-05-12T08:57:13Z | 2023-05-12T08:57:13Z | SYoy |
1,701,455,051 | Update datasets dependency to 2.12.0 version | After 2.12.0 datasets release, update dependencies on it.
Fix #1099. | Update datasets dependency to 2.12.0 version: After 2.12.0 datasets release, update dependencies on it.
Fix #1099. | closed | 2023-05-09T06:43:41Z | 2023-05-10T07:43:20Z | 2023-05-10T07:40:21Z | albertvillanova |
1,701,078,461 | Separate job runner compute logic | This is a proposal to separate management logic (like run, backfill, skip, process, set_crashed, exceed_maximum_duration, etc.) from the job runner. The job runner only computes, pre-computes and post-computes a response, without other responsibilities (like an operator).
This will make it easier to switch to an orchestrator some day.
I introduce a "job_manager" class, which will be in charge of the extra actions/validations related to job pipelines and will invoke the job runner whenever a compute is needed.
A job runner does not need to know about other actions like storing the cache, validating whether a parallel cache entry already exists, or deciding whether to skip the compute; it only has to "compute" the response.
I also introduce the parent classes dataset_job_runner, config_job_runner and split_job_runner (they differ in the attributes needed by each operator; this will close https://github.com/huggingface/datasets-server/issues/1074).
I think this way we could extend to more levels of granularity in the future (like partitions, for example, for https://github.com/huggingface/datasets-server/issues/1087).
Sorry if there are many files, but don't be afraid: most of them are changes to imports (because I moved some classes/functions to other files) and changes to the tests, now that the process method no longer exists in the job runner. | Separate job runner compute logic: This is a proposal to separate management logic (like run, backfill, skip, process, set_crashed, exceed_maximum_duration, etc.) from the job runner. The job runner only computes, pre-computes and post-computes a response, without other responsibilities (like an operator).
This will make it easier to switch to an orchestrator some day.
I introduce a "job_manager" class, which will be in charge of the extra actions/validations related to job pipelines and will invoke the job runner whenever a compute is needed.
A job runner does not need to know about other actions like storing the cache, validating whether a parallel cache entry already exists, or deciding whether to skip the compute; it only has to "compute" the response.
I also introduce the parent classes dataset_job_runner, config_job_runner and split_job_runner (they differ in the attributes needed by each operator; this will close https://github.com/huggingface/datasets-server/issues/1074).
I think this way we could extend to more levels of granularity in the future (like partitions, for example, for https://github.com/huggingface/datasets-server/issues/1087).
Sorry if there are many files, but don't be afraid: most of them are changes to imports (because I moved some classes/functions to other files) and changes to the tests, now that the process method no longer exists in the job runner. | closed | 2023-05-08T23:29:31Z | 2023-05-11T20:10:01Z | 2023-05-11T16:04:39Z | AndreaFrancis |
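For illustration, here is a minimal sketch (hypothetical names, not the PR's actual classes) of the split described in the PR above: a manager that owns the bookkeeping (skipping, caching, error handling) and a runner that only computes a response.

```python
# Hypothetical sketch of the job manager / job runner separation described above.
from abc import ABC, abstractmethod
from typing import Any


class JobRunner(ABC):
    """Only knows how to compute (and optionally pre/post-compute) a response."""

    def pre_compute(self) -> None: ...
    @abstractmethod
    def compute(self) -> dict[str, Any]: ...
    def post_compute(self) -> None: ...


class JobManager:
    """Owns everything else: skip logic, cache storage, crash handling, etc."""

    def __init__(self, runner: JobRunner, cache: dict[str, Any], cache_key: str) -> None:
        self.runner = runner
        self.cache = cache
        self.cache_key = cache_key

    def run(self) -> dict[str, Any]:
        if self.cache_key in self.cache:  # "skip the compute" lives in the manager
            return self.cache[self.cache_key]
        self.runner.pre_compute()
        try:
            response = self.runner.compute()
        finally:
            self.runner.post_compute()
        self.cache[self.cache_key] = response  # storing the cache is the manager's job
        return response
```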
1,699,075,979 | Dataset Viewer issue for wangrongsheng/MedDialog-1.1M | ### Link
https://huggingface.co/datasets/wangrongsheng/MedDialog-1.1M
### Description
The dataset viewer is not working for dataset wangrongsheng/MedDialog-1.1M.
Error details:
```
Error code: JobRunnerCrashedError
Traceback: The previous step failed, the error is copied to this step: kind='config-parquet' dataset='wangrongsheng/MedDialog-1.1M' config='wangrongsheng--MedDialog-1.1M' split=None---The previous step failed, the error is copied to this step: kind='config-parquet-and-info' dataset='wangrongsheng/MedDialog-1.1M' config='wangrongsheng--MedDialog-1.1M' split=None---
```
| Dataset Viewer issue for wangrongsheng/MedDialog-1.1M: ### Link
https://huggingface.co/datasets/wangrongsheng/MedDialog-1.1M
### Description
The dataset viewer is not working for dataset wangrongsheng/MedDialog-1.1M.
Error details:
```
Error code: JobRunnerCrashedError
Traceback: The previous step failed, the error is copied to this step: kind='config-parquet' dataset='wangrongsheng/MedDialog-1.1M' config='wangrongsheng--MedDialog-1.1M' split=None---The previous step failed, the error is copied to this step: kind='config-parquet-and-info' dataset='wangrongsheng/MedDialog-1.1M' config='wangrongsheng--MedDialog-1.1M' split=None---
```
| closed | 2023-05-07T13:56:01Z | 2023-05-09T03:00:22Z | 2023-05-09T03:00:22Z | WangRongsheng |
1,697,681,526 | Unsupport image and audio in /rows | ...until we fix the timeout issues | Unsupport image and audio in /rows: ...until we fix the timeout issues | closed | 2023-05-05T14:04:59Z | 2023-05-05T14:23:45Z | 2023-05-05T14:20:43Z | lhoestq |
1,697,082,475 | Dataset Viewer issue for Numerati/numerai-datasets | ### Link
https://huggingface.co/datasets/Numerati/numerai-datasets
### Description
The dataset viewer is not working for dataset Numerati/numerai-datasets.
Error details:
```
Error code: UnexpectedError
```
| Dataset Viewer issue for Numerati/numerai-datasets: ### Link
https://huggingface.co/datasets/Numerati/numerai-datasets
### Description
The dataset viewer is not working for dataset Numerati/numerai-datasets.
Error details:
```
Error code: UnexpectedError
```
| closed | 2023-05-05T06:32:37Z | 2023-05-12T11:14:29Z | 2023-05-12T09:03:52Z | roxyrong |
1,696,696,355 | Moving input (dataset,config,split) params validation to parent job runners | Replaces https://github.com/huggingface/datasets-server/pull/1079
Closes https://github.com/huggingface/datasets-server/issues/1074 | Moving input (dataset,config,split) params validation to parent job runners: Replaces https://github.com/huggingface/datasets-server/pull/1079
Closes https://github.com/huggingface/datasets-server/issues/1074 | closed | 2023-05-04T21:01:26Z | 2023-10-10T13:29:50Z | 2023-05-09T20:37:59Z | AndreaFrancis |
1,696,329,308 | Delete `dataset-split-names-from-dataset-info` job runner | Part of https://github.com/huggingface/datasets-server/issues/1086 | Delete `dataset-split-names-from-dataset-info` job runner: Part of https://github.com/huggingface/datasets-server/issues/1086 | closed | 2023-05-04T16:36:03Z | 2023-05-05T17:07:59Z | 2023-05-05T17:04:22Z | polinaeterna |
1,695,874,766 | Increase api replicas + skip cacheMaintenance | null | Increase api replicas + skip cacheMaintenance: | closed | 2023-05-04T12:02:51Z | 2023-05-04T12:07:30Z | 2023-05-04T12:04:30Z | lhoestq |
1,695,031,349 | Dataset Viewer issue for databricks/databricks-dolly-15k | ### Link
https://huggingface.co/datasets/databricks/databricks-dolly-15k
### Description
Hi all, I see that some other datasets may be having issues now, so this is probably related, but wanted to report the issues I started seeing today at `databricks/databricks-dolly-15k`.
After I committed a change to the dataset file, I see that the viewer no longer works. The update seemed normal, following the process suggested for creating a pull request and merging. It doesn't seem like it's a problem with the data, but I'm not clear:
'Response has already been computed and stored in cache kind: split-first-rows-from-streaming. Compute will be skipped."
```
Error code: ResponseAlreadyComputedError
```
I also find that the dataset is not downloadable. It yields a 403 / Access Denied error via the `datasets` library or through the web UI. https://huggingface.co/datasets/databricks/databricks-dolly-15k/resolve/main/databricks-dolly-15k.jsonl
Thank you for any help you might be able to provide, or pointers if we did something wrong. | Dataset Viewer issue for databricks/databricks-dolly-15k: ### Link
https://huggingface.co/datasets/databricks/databricks-dolly-15k
### Description
Hi all, I see that some other datasets may be having issues now, so this is probably related, but wanted to report the issues I started seeing today at `databricks/databricks-dolly-15k`.
After I committed a change to the dataset file, I see that the viewer no longer works. The update seemed normal, following the process suggested for creating a pull request and merging. It doesn't seem like it's a problem with the data, but I'm not clear:
'Response has already been computed and stored in cache kind: split-first-rows-from-streaming. Compute will be skipped."
```
Error code: ResponseAlreadyComputedError
```
I also find that the dataset is not downloadable. It yields a 403 / Access Denied error via the `datasets` library or through the web UI. https://huggingface.co/datasets/databricks/databricks-dolly-15k/resolve/main/databricks-dolly-15k.jsonl
Thank you for any help you might be able to provide, or pointers if we did something wrong. | closed | 2023-05-04T00:46:02Z | 2023-05-05T01:03:54Z | 2023-05-05T01:03:54Z | srowen |
1,695,029,776 | Dataset Viewer issue for databricks/databricks-dolly-15k | ### Link
https://huggingface.co/datasets/databricks/databricks-dolly-15k
### Description
The dataset viewer is not working for dataset databricks/databricks-dolly-15k.
Error details:
```
Error code: ResponseAlreadyComputedError
```
| Dataset Viewer issue for databricks/databricks-dolly-15k: ### Link
https://huggingface.co/datasets/databricks/databricks-dolly-15k
### Description
The dataset viewer is not working for dataset databricks/databricks-dolly-15k.
Error details:
```
Error code: ResponseAlreadyComputedError
```
| closed | 2023-05-04T00:42:56Z | 2023-05-04T00:50:29Z | 2023-05-04T00:50:29Z | calam1 |
1,694,799,806 | Non-working datasets in the first page of hf.co/datasets | As of today (2023/05/03), the dataset viewer does not work for the following datasets, in the list of the 30 most downloaded datasets (https://huggingface.co/datasets):
- [x] https://huggingface.co/datasets/allenai/nllb, `Error code: JobRunnerCrashedError`
- [ ] https://huggingface.co/datasets/facebook/flores, `Error code: JobRunnerCrashedError`
- [x] https://huggingface.co/datasets/allenai/c4, `Error code: JobRunnerCrashedError`
- [x] https://huggingface.co/datasets/lukaemon/mmlu, `Error code: StreamingRowsError`
- [x] https://huggingface.co/datasets/piqa, `[Errno 2] No such file or directory: '/datasets-server-cache/all/datasets/2023-05-03-18-48-58-config-parquet-and-info-piqa-f5651569/piqa/plain_text/1.1.0/6c611c1a9bf220943c4174e117d3b660859665baf1d43156230116185312d011/piqa-train.parquet'
`
- [x] https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt, `Error code: JobRunnerCrashedError`
- [x] https://huggingface.co/datasets/hendrycks_test, `Couldn't get the size of external files in `_split_generators` because a request failed: HTTPSConnectionPool(host='people.eecs.berkeley.edu', port=443): Max retries exceeded with url: /~hendrycks/data.tar (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f15f029aac0>: Failed to establish a new connection: [Errno 111] Connection refused')) Please consider moving your data files in this dataset repository instead (e.g. inside a data/ folder).`
- [x] https://huggingface.co/datasets/HuggingFaceM4/cm4-synthetic-testing, error in some cells: `ERROR: type should be images list, got [ null, { "src": "https://datasets-server.huggingface.co/assets/HuggingFaceM4/cm4-synthetic-testing/--/100.repeat/100.unique/0/images/image-1d300ea.jpg", "height": 19, "width": 32 }, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]` - see https://github.com/huggingface/moon-landing/pull/6218 (internal)
- [x] https://huggingface.co/datasets/GEM/wiki_lingua, `Error code: JobRunnerCrashedError`
| Non-working datasets in the first page of hf.co/datasets: As of today (2023/05/03), the dataset viewer does not work for the following datasets, in the list of the 30 most downloaded datasets (https://huggingface.co/datasets):
- [x] https://huggingface.co/datasets/allenai/nllb, `Error code: JobRunnerCrashedError`
- [ ] https://huggingface.co/datasets/facebook/flores, `Error code: JobRunnerCrashedError`
- [x] https://huggingface.co/datasets/allenai/c4, `Error code: JobRunnerCrashedError`
- [x] https://huggingface.co/datasets/lukaemon/mmlu, `Error code: StreamingRowsError`
- [x] https://huggingface.co/datasets/piqa, `[Errno 2] No such file or directory: '/datasets-server-cache/all/datasets/2023-05-03-18-48-58-config-parquet-and-info-piqa-f5651569/piqa/plain_text/1.1.0/6c611c1a9bf220943c4174e117d3b660859665baf1d43156230116185312d011/piqa-train.parquet'
`
- [x] https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt, `Error code: JobRunnerCrashedError`
- [x] https://huggingface.co/datasets/hendrycks_test, `Couldn't get the size of external files in `_split_generators` because a request failed: HTTPSConnectionPool(host='people.eecs.berkeley.edu', port=443): Max retries exceeded with url: /~hendrycks/data.tar (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f15f029aac0>: Failed to establish a new connection: [Errno 111] Connection refused')) Please consider moving your data files in this dataset repository instead (e.g. inside a data/ folder).`
- [x] https://huggingface.co/datasets/HuggingFaceM4/cm4-synthetic-testing, error in some cells: `ERROR: type should be images list, got [ null, { "src": "https://datasets-server.huggingface.co/assets/HuggingFaceM4/cm4-synthetic-testing/--/100.repeat/100.unique/0/images/image-1d300ea.jpg", "height": 19, "width": 32 }, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]` - see https://github.com/huggingface/moon-landing/pull/6218 (internal)
- [x] https://huggingface.co/datasets/GEM/wiki_lingua, `Error code: JobRunnerCrashedError`
| closed | 2023-05-03T20:49:06Z | 2023-06-27T12:39:06Z | 2023-06-27T09:44:54Z | severo |
1,694,795,699 | Fix parent class for split-names-from-dataset-info and first-rows-from-parquet | split-names-from-dataset-info and first-rows-from-parquet don't need to inherit from DatasetsBasedJobRunner, changing to JobRunner | Fix parent class for split-names-from-dataset-info and first-rows-from-parquet: split-names-from-dataset-info and first-rows-from-parquet don't need to inherit from DatasetsBasedJobRunner, changing to JobRunner | closed | 2023-05-03T20:47:15Z | 2023-05-03T21:35:37Z | 2023-05-03T21:32:48Z | AndreaFrancis |
1,694,639,521 | fix: 🐛 don't refresh the UnexpectedError entries in next sync | null | fix: 🐛 don't refresh the UnexpectedError entries in next sync: | closed | 2023-05-03T19:08:36Z | 2023-05-03T19:12:26Z | 2023-05-03T19:09:23Z | severo |
1,694,247,342 | Re-lower row group | Re-add the changes from https://github.com/huggingface/datasets-server/pull/833 that were inadvertently removed by https://github.com/huggingface/datasets-server/pull/985
close https://github.com/huggingface/datasets-server/issues/1127 | Re-lower row group: Re-add the changes from https://github.com/huggingface/datasets-server/pull/833 that were inadvertently removed by https://github.com/huggingface/datasets-server/pull/985
close https://github.com/huggingface/datasets-server/issues/1127 | closed | 2023-05-03T14:57:26Z | 2023-05-03T16:31:48Z | 2023-05-03T16:28:36Z | lhoestq |
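For context, the "row group size" mentioned above is a property of how the parquet files are written. The sketch below is illustrative only (not the repository's code): it shows how a smaller row group size is set when writing parquet with pyarrow, so a reader can fetch about 100 rows without downloading a huge group.

```python
# Illustrative: write a parquet file with 100 rows per row group instead of the
# default, so range requests for a single page stay small (important for images).
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"image_bytes": [b"\x00" * 1024] * 1_000, "label": list(range(1_000))})
pq.write_table(table, "train.parquet", row_group_size=100)
```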
1,694,176,228 | Delete preview data when a dataset preview is disabled | When a dataset preview is disabled (via metadata or when switched to private) we should delete the cache entry in the db and delete the cached assets on the nfs drive (i.e. images/audio files) | Delete preview data when a dataset preview is disabled: When a dataset preview is disabled (via metadata or when switched to private) we should delete the cache entry in the db and delete the cached assets on the nfs drive (i.e. images/audio files) | closed | 2023-05-03T14:21:44Z | 2023-06-02T16:07:15Z | 2023-06-02T16:07:15Z | lhoestq |
1,694,017,208 | Plot processing graph | I updated the admin-ui with a new `Processing graph` tab and a `Plot processing graph` button. I wasn't sure that the plot would have an appropriate size by default for all screens, so I added sliders for width and height:

It doesn't look very nice, but is it okay for a first version?
I also updated some requirements, specifically `gradio`, because the old version of `gradio` wasn't rendering plots well.
I think it was a bad idea to update `poetry.lock` because now it has all the `libcommon` (from local path) dependencies. Maybe remove it completely from this directory? In the UI, `libcommon` is installed from git, main branch. | Plot processing graph: I updated the admin-ui with a new `Processing graph` tab and a `Plot processing graph` button. I wasn't sure that the plot would have an appropriate size by default for all screens, so I added sliders for width and height:

It doesn't look very nice, but is it okay for a first version?
I also updated some requirements, specifically `gradio`, because the old version of `gradio` wasn't rendering plots well.
I think it was a bad idea to update `poetry.lock` because now it has all the `libcommon` (from local path) dependencies. Maybe remove it completely from this directory? In the UI, `libcommon` is installed from git, main branch. | closed | 2023-05-03T12:52:10Z | 2023-05-12T11:59:22Z | 2023-05-12T11:56:40Z | polinaeterna |
1,693,920,893 | fix: 🐛 fix the URL for /admin/force-refresh | The URL is weird, but while some steps still start with `/`, we are a bit stuck with that (otherwise, the double slash would be converted to one slash, and would break the route). | fix: 🐛 fix the URL for /admin/force-refresh: The URL is weird, but while some steps still start with `/`, we are a bit stuck with that (otherwise, the double slash would be converted to one slash, and would break the route). | closed | 2023-05-03T11:51:31Z | 2023-05-03T12:01:32Z | 2023-05-03T11:58:38Z | severo |
1,693,785,314 | Unable to refresh certain jobs in the admin UI | e.g. `config-parquet-and-info` on cnn_dailymail 1.0.0 returns
```
[cnn_dailymail] ❌ Failed to add processing step to the queue. Error 404: b'Not Found'
``` | Unable to refresh certain jobs in the admin UI: e.g. `config-parquet-and-info` on cnn_dailymail 1.0.0 returns
```
[cnn_dailymail] ❌ Failed to add processing step to the queue. Error 404: b'Not Found'
``` | closed | 2023-05-03T10:18:37Z | 2023-05-03T12:03:50Z | 2023-05-03T12:03:50Z | lhoestq |
1,693,777,333 | Dataset viewer issue: UnexpectedError on page 1 even if the parquet job succeeded | see https://huggingface.co/datasets/xnli/viewer/all_languages/train?p=1
The admin page shows that there is an error `FileSystemError`:
type | dataset | config | split | http_status | error_code | job_runner_version | dataset_git_revision | progress | updated_at | details
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
split-first-rows-from-parquet | xnli | all_languages | train | 500 | FileSystemError | 2 | 1cdcf07be24d81f3d782038a5a0b9c8d62f76e60 | | 2023-04-28T13:29:12.187000 | {"error": "Could not read the parquet files: 416 Client Error: Requested Range Not Satisfiable for url: https://s3.us-east-1.amazonaws.com/lfs.huggingface.co/datasets/xnli/...
| Dataset viewer issue: UnexpectedError on page 1 even if the parquet job succeeded: see https://huggingface.co/datasets/xnli/viewer/all_languages/train?p=1
The admin page shows that there is an error `FileSystemError`:
type | dataset | config | split | http_status | error_code | job_runner_version | dataset_git_revision | progress | updated_at | details
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
split-first-rows-from-parquet | xnli | all_languages | train | 500 | FileSystemError | 2 | 1cdcf07be24d81f3d782038a5a0b9c8d62f76e60 | | 2023-04-28T13:29:12.187000 | {"error": "Could not read the parquet files: 416 Client Error: Requested Range Not Satisfiable for url: https://s3.us-east-1.amazonaws.com/lfs.huggingface.co/datasets/xnli/...
| closed | 2023-05-03T10:12:22Z | 2023-06-14T12:11:57Z | 2023-06-14T12:11:57Z | lhoestq |
1,693,764,900 | Dataset viewer issue: first page is missing elements | See https://huggingface.co/datasets/gem/viewer/totto/train?p=0
the first page of the viewer shows 10 rows instead of 100. The other pages are correctly showing 100 rows though | Dataset viewer issue: first page is missing elements: See https://huggingface.co/datasets/gem/viewer/totto/train?p=0
the first page of the viewer shows 10 rows instead of 100. The other pages are correctly showing 100 rows though | closed | 2023-05-03T10:04:14Z | 2023-06-02T15:55:17Z | 2023-06-02T15:55:11Z | lhoestq |
1,693,738,943 | Too large row group size for parquet exports of image datasets | See https://huggingface.co/datasets/sasha/dog-food/blob/refs%2Fconvert%2Fparquet/sasha--dog-food/parquet-train.parquet which still has a row group size of 1K even though it's supposed to be 100 after #833 | Too large row group size for parquet exports of image datasets: See https://huggingface.co/datasets/sasha/dog-food/blob/refs%2Fconvert%2Fparquet/sasha--dog-food/parquet-train.parquet which still has a row group size of 1K even though it's supposed to be 100 after #833 | closed | 2023-05-03T09:50:30Z | 2023-05-03T16:28:38Z | 2023-05-03T16:28:38Z | lhoestq |
1,693,732,368 | Dataset viewer issue for wikitext: wrong number of pages | It shows only 44 pages at https://huggingface.co/datasets/wikitext even though the dataset has more than 1M rows | Dataset viewer issue for wikitext: wrong number of pages: It shows only 44 pages at https://huggingface.co/datasets/wikitext even though the dataset has more than 1M rows | closed | 2023-05-03T09:46:09Z | 2023-05-03T14:41:26Z | 2023-05-03T12:07:53Z | lhoestq |
1,693,548,095 | Fix dataset split names | - we had a bug in the name of a field in dataset-info entry
- but I removed all the code related to the dataset-info entry, since it's not necessary. | Fix dataset split names: - we had a bug in the name of a field in dataset-info entry
- but I removed all the code related to the dataset-info entry, since it's not necessary. | closed | 2023-05-03T07:32:25Z | 2023-05-03T15:29:06Z | 2023-05-03T15:26:06Z | severo |
1,693,057,802 | Remove spawning url content from API /opt-in-out-urls | Closes https://github.com/huggingface/datasets-server/issues/1121
A new job runner is created that will copy only the num values from the previous split-opt-in-out-urls-scan response.
This new cache kind will be the new response for the /opt-in-out-urls endpoint. | Remove spawning url content from API /opt-in-out-urls: Closes https://github.com/huggingface/datasets-server/issues/1121
A new job runner is created that will copy only the num values from the previous split-opt-in-out-urls-scan response.
This new cache kind will be the new response for the /opt-in-out-urls endpoint. | closed | 2023-05-02T20:28:01Z | 2023-05-02T20:52:09Z | 2023-05-02T20:48:08Z | AndreaFrancis |
1,693,019,779 | Use a dedicated error code when the error is due to disk | Error:
```
[Errno 28] No space left on device: '/datasets-server-cache/all/datasets/2023-05-02-17-52-02-split-first-rows-from-parquet-abidlabs-testi-d85cb7a1'
``` | Use a dedicated error code when the error is due to disk: Error:
```
[Errno 28] No space left on device: '/datasets-server-cache/all/datasets/2023-05-02-17-52-02-split-first-rows-from-parquet-abidlabs-testi-d85cb7a1'
``` | closed | 2023-05-02T19:57:52Z | 2023-08-17T15:43:05Z | 2023-08-17T15:43:04Z | severo |
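A minimal sketch of the behavior requested in the issue above (hypothetical names, not the actual worker code): detect the "no space left on device" OSError and surface it under a dedicated error code instead of UnexpectedError.

```python
# Hypothetical sketch: map OSError ENOSPC ([Errno 28] No space left on device)
# to a dedicated, recognizable error code.
import errno
from typing import Any, Callable


class DiskError(Exception):
    """Hypothetical error with a dedicated code for a full disk."""

    error_code = "DiskError"


def compute_with_disk_error_code(compute: Callable[[], Any]) -> Any:
    try:
        return compute()
    except OSError as err:
        if err.errno == errno.ENOSPC:
            raise DiskError("The worker ran out of disk space") from err
        raise
```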
1,692,997,262 | Create a cronjob to clean the dangling cache directories | Currently, the disk used to store the `datasets` library cache for the workers is full (shortage of inodes). It's mainly due to dangling cache directories that are days or weeks old. We should create a cronjob that runs every day and deletes the directories older than 2 days. | Create a cronjob to clean the dangling cache directories: Currently, the disk used to store the `datasets` library cache for the workers is full (shortage of inodes). It's mainly due to dangling cache directories that are days or weeks old. We should create a cronjob that runs every day and deletes the directories older than 2 days. | closed | 2023-05-02T19:41:04Z | 2024-02-06T19:11:49Z | 2024-02-06T11:20:24Z | severo |
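As a sketch of what such a cron job could do (the cache path used in the comment is taken from the error messages in this thread, but the function itself is illustrative, not the repository's code):

```python
# Illustrative: delete dataset cache directories whose modification time is older
# than a given number of days; meant to be run daily (e.g. from a Kubernetes CronJob).
import shutil
import time
from pathlib import Path


def clean_dangling_cache_dirs(cache_root: str, max_age_days: float = 2.0) -> None:
    cutoff = time.time() - max_age_days * 24 * 3600
    for directory in Path(cache_root).iterdir():
        if directory.is_dir() and directory.stat().st_mtime < cutoff:
            shutil.rmtree(directory, ignore_errors=True)


# Example call (path as seen in the errors above):
# clean_dangling_cache_dirs("/datasets-server-cache/all/datasets", max_age_days=2)
```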
1,692,974,093 | Remove urls content in Spawning response | Currently, the SplitOptInOutJobRunner job runner displays the following fields:
```
urls_columns: List[str]
num_opt_in_urls: int
num_opt_out_urls: int
num_urls: int
num_scanned_rows: int
has_urls_columns: bool
opt_in_urls: List[OptUrl]
opt_out_urls: List[OptUrl]
```
For the first phase, `opt_in_urls` and `opt_out_urls` shouldn't be shown until we find a better way to store them.
cc. @julien-c @severo | Remove urls content in Spawning response: Currently, the SplitOptInOutJobRunner job runner displays the following fields:
```
urls_columns: List[str]
num_opt_in_urls: int
num_opt_out_urls: int
num_urls: int
num_scanned_rows: int
has_urls_columns: bool
opt_in_urls: List[OptUrl]
opt_out_urls: List[OptUrl]
```
For the first phase, `opt_in_urls` and `opt_out_urls` shouldn't be shown until we find a better way to store them.
cc. @julien-c @severo | closed | 2023-05-02T19:22:24Z | 2023-05-02T20:48:09Z | 2023-05-02T20:48:09Z | AndreaFrancis |
1,692,944,306 | Dataset Viewer issue for JDaniel423/running-records-errors-dataset | ### Link
https://huggingface.co/datasets/JDaniel423/running-records-errors-dataset
### Description
The dataset viewer is not working for dataset JDaniel423/running-records-errors-dataset.
Error details:
```
Error code: UnexpectedError
[Errno 28] No space left on device: '/datasets-server-cache/all/datasets/2023-05-02-18-57-56--config-names-JDaniel423-running-records-err-35b8eb1c'
```
| Dataset Viewer issue for JDaniel423/running-records-errors-dataset: ### Link
https://huggingface.co/datasets/JDaniel423/running-records-errors-dataset
### Description
The dataset viewer is not working for dataset JDaniel423/running-records-errors-dataset.
Error details:
```
Error code: UnexpectedError
[Errno 28] No space left on device: '/datasets-server-cache/all/datasets/2023-05-02-18-57-56--config-names-JDaniel423-running-records-err-35b8eb1c'
```
| closed | 2023-05-02T19:00:08Z | 2023-05-03T18:23:06Z | 2023-05-03T18:23:06Z | JDaniel41 |
1,692,727,035 | Dataset Viewer issue for george-chou/pianos | ### Link
https://huggingface.co/datasets/george-chou/pianos
### Description
The dataset viewer is not working for dataset george-chou/pianos.
Error details:
```
Error code: ConfigNamesError
Exception: OSError
Message: [Errno 28] No space left on device: '/datasets-server-cache/all/datasets/2023-05-02-16-15-38--config-names-george-chou-pianos-e5243fc3/downloads/tmp8c_j36d_'
Traceback: The previous step failed, the error is copied to this step: kind='dataset-info' dataset='george-chou/pianos' config=None split=None---The previous step failed, the error is copied to this step: kind='/config-names' dataset='george-chou/pianos' config=None split=None---Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config_names.py", line 99, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 323, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1215, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1184, in dataset_module_factory
return HubDatasetModuleFactoryWithScript(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 901, in get_module
local_path = self.download_loading_script()
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 869, in download_loading_script
return cached_path(file_path, download_config=download_config)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 183, in cached_path
output_path = get_from_cache(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 602, in get_from_cache
with temp_file_manager() as temp_file:
File "/usr/local/lib/python3.9/tempfile.py", line 545, in NamedTemporaryFile
(fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type)
File "/usr/local/lib/python3.9/tempfile.py", line 255, in _mkstemp_inner
fd = _os.open(file, flags, 0o600)
OSError: [Errno 28] No space left on device: '/datasets-server-cache/all/datasets/2023-05-02-16-15-38--config-names-george-chou-pianos-e5243fc3/downloads/tmp8c_j36d_'
```
| Dataset Viewer issue for george-chou/pianos: ### Link
https://huggingface.co/datasets/george-chou/pianos
### Description
The dataset viewer is not working for dataset george-chou/pianos.
Error details:
```
Error code: ConfigNamesError
Exception: OSError
Message: [Errno 28] No space left on device: '/datasets-server-cache/all/datasets/2023-05-02-16-15-38--config-names-george-chou-pianos-e5243fc3/downloads/tmp8c_j36d_'
Traceback: The previous step failed, the error is copied to this step: kind='dataset-info' dataset='george-chou/pianos' config=None split=None---The previous step failed, the error is copied to this step: kind='/config-names' dataset='george-chou/pianos' config=None split=None---Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config_names.py", line 99, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 323, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1215, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1184, in dataset_module_factory
return HubDatasetModuleFactoryWithScript(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 901, in get_module
local_path = self.download_loading_script()
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 869, in download_loading_script
return cached_path(file_path, download_config=download_config)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 183, in cached_path
output_path = get_from_cache(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 602, in get_from_cache
with temp_file_manager() as temp_file:
File "/usr/local/lib/python3.9/tempfile.py", line 545, in NamedTemporaryFile
(fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type)
File "/usr/local/lib/python3.9/tempfile.py", line 255, in _mkstemp_inner
fd = _os.open(file, flags, 0o600)
OSError: [Errno 28] No space left on device: '/datasets-server-cache/all/datasets/2023-05-02-16-15-38--config-names-george-chou-pianos-e5243fc3/downloads/tmp8c_j36d_'
```
| closed | 2023-05-02T16:26:37Z | 2023-05-03T10:19:04Z | 2023-05-03T10:19:04Z | monetjoe |
1,692,709,351 | Dataset Viewer issue for krr-oxford/OntoLAMA | ### Link
https://huggingface.co/datasets/krr-oxford/OntoLAMA
### Description
The dataset viewer is not working for dataset krr-oxford/OntoLAMA.
Error details:
```
Error code: ConfigNamesError
Exception: OSError
Message: [Errno 28] No space left on device: '/datasets-server-cache/all/datasets/2023-05-02-16-09-38--config-names-krr-oxford-OntoLAMA-1fc33233/downloads'
Traceback: The previous step failed, the error is copied to this step: kind='dataset-info' dataset='krr-oxford/OntoLAMA' config=None split=None---The previous step failed, the error is copied to this step: kind='/config-names' dataset='krr-oxford/OntoLAMA' config=None split=None---Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config_names.py", line 99, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 323, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1215, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1184, in dataset_module_factory
return HubDatasetModuleFactoryWithScript(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 901, in get_module
local_path = self.download_loading_script()
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 869, in download_loading_script
return cached_path(file_path, download_config=download_config)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 183, in cached_path
output_path = get_from_cache(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 470, in get_from_cache
os.makedirs(cache_dir, exist_ok=True)
File "/usr/local/lib/python3.9/os.py", line 225, in makedirs
mkdir(name, mode)
OSError: [Errno 28] No space left on device: '/datasets-server-cache/all/datasets/2023-05-02-16-09-38--config-names-krr-oxford-OntoLAMA-1fc33233/downloads'
```
| Dataset Viewer issue for krr-oxford/OntoLAMA: ### Link
https://huggingface.co/datasets/krr-oxford/OntoLAMA
### Description
The dataset viewer is not working for dataset krr-oxford/OntoLAMA.
Error details:
```
Error code: ConfigNamesError
Exception: OSError
Message: [Errno 28] No space left on device: '/datasets-server-cache/all/datasets/2023-05-02-16-09-38--config-names-krr-oxford-OntoLAMA-1fc33233/downloads'
Traceback: The previous step failed, the error is copied to this step: kind='dataset-info' dataset='krr-oxford/OntoLAMA' config=None split=None---The previous step failed, the error is copied to this step: kind='/config-names' dataset='krr-oxford/OntoLAMA' config=None split=None---Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config_names.py", line 99, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 323, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1215, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1184, in dataset_module_factory
return HubDatasetModuleFactoryWithScript(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 901, in get_module
local_path = self.download_loading_script()
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 869, in download_loading_script
return cached_path(file_path, download_config=download_config)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 183, in cached_path
output_path = get_from_cache(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 470, in get_from_cache
os.makedirs(cache_dir, exist_ok=True)
File "/usr/local/lib/python3.9/os.py", line 225, in makedirs
mkdir(name, mode)
OSError: [Errno 28] No space left on device: '/datasets-server-cache/all/datasets/2023-05-02-16-09-38--config-names-krr-oxford-OntoLAMA-1fc33233/downloads'
```
| closed | 2023-05-02T16:16:00Z | 2023-05-17T09:25:55Z | 2023-05-17T09:25:55Z | Lawhy |
1,692,375,721 | Recompute all the cache entries with error code "UnexpectedError" | To be launched after https://github.com/huggingface/datasets-server/pull/1116 is merged, to recompute the erroneous UnexpectedError entries | Recompute all the cache entries with error code "UnexpectedError": To be launched after https://github.com/huggingface/datasets-server/pull/1116 is merged, to recompute the erroneous UnexpectedError entries | closed | 2023-05-02T12:56:54Z | 2023-05-02T13:41:21Z | 2023-05-02T13:38:04Z | severo |
1,692,109,421 | fix: 🐛 extend takes one argument (a list), not one arg per elem | null | fix: 🐛 extend takes one argument (a list), not one arg per elem: | closed | 2023-05-02T09:46:25Z | 2023-05-02T13:39:57Z | 2023-05-02T13:36:55Z | severo |
1,692,089,264 | The error is not reported as expected | See https://github.com/huggingface/datasets-server/issues/1110, for example:
the error, for the step `config-parquet-and-info`, is:
```
zstd decompress error: Frame requires too much memory for decoding
```
But we get:
```
list.extend() takes exactly one argument (7 given)
``` | The error is not reported as expected: See https://github.com/huggingface/datasets-server/issues/1110, for example:
the error, for the step `config-parquet-and-info`, is:
```
zstd decompress error: Frame requires too much memory for decoding
```
But we get:
```
list.extend() takes exactly one argument (7 given)
``` | closed | 2023-05-02T09:31:47Z | 2023-05-09T08:15:54Z | 2023-05-09T08:15:54Z | severo |
1,692,076,152 | Dataset Viewer issue for code_search_net | ### Link
https://huggingface.co/datasets/code_search_net
### Description
The dataset viewer is not working for dataset code_search_net.
Error details:
```
Error code: UnexpectedError
```
| Dataset Viewer issue for code_search_net: ### Link
https://huggingface.co/datasets/code_search_net
### Description
The dataset viewer is not working for dataset code_search_net.
Error details:
```
Error code: UnexpectedError
```
| closed | 2023-05-02T09:22:26Z | 2023-05-17T09:33:28Z | 2023-05-17T09:33:28Z | geraldlab |
1,692,007,838 | Add error detail in admin UI | null | Add error detail in admin UI: | closed | 2023-05-02T08:38:06Z | 2023-05-02T09:44:37Z | 2023-05-02T09:41:28Z | severo |
1,691,831,667 | Dataset Viewer issue for poloclub/diffusiondb | ### Link
https://huggingface.co/datasets/poloclub/diffusiondb
### Description
The dataset viewer is not working for dataset poloclub/diffusiondb.
Error details:
```
Error code: UnexpectedError
```
| Dataset Viewer issue for poloclub/diffusiondb: ### Link
https://huggingface.co/datasets/poloclub/diffusiondb
### Description
The dataset viewer is not working for dataset poloclub/diffusiondb.
Error details:
```
Error code: UnexpectedError
```
| closed | 2023-05-02T06:07:06Z | 2023-05-27T09:51:05Z | 2023-05-17T09:36:24Z | vkvickkey |
1,690,514,818 | Dataset Viewer issue for mehdie/sefaria | ### Link
https://huggingface.co/datasets/mehdie/sefaria
### Description
The dataset viewer is not working for dataset mehdie/sefaria.
Error details:
```
Error code: UnexpectedError
```
| Dataset Viewer issue for mehdie/sefaria: ### Link
https://huggingface.co/datasets/mehdie/sefaria
### Description
The dataset viewer is not working for dataset mehdie/sefaria.
Error details:
```
Error code: UnexpectedError
```
| closed | 2023-05-01T07:46:58Z | 2023-05-09T08:26:02Z | 2023-05-09T08:26:02Z | tomersagi |
1,690,497,990 | Dataset Viewer issue for eli5 | ### Link
https://huggingface.co/datasets/eli5
### Description
while calling this script "curl -X GET \
"https://datasets-server.huggingface.co/first-rows?dataset=eli5&config=LFQA_reddit&split=test_eli5""
I got this error "{"error":"list.extend() takes exactly one argument (7 given)"}"
| Dataset Viewer issue for eli5: ### Link
https://huggingface.co/datasets/eli5
### Description
while calling this script "curl -X GET \
"https://datasets-server.huggingface.co/first-rows?dataset=eli5&config=LFQA_reddit&split=test_eli5""
I got this error "{"error":"list.extend() takes exactly one argument (7 given)"}"
| closed | 2023-05-01T07:22:00Z | 2023-05-09T07:49:42Z | 2023-05-09T07:49:42Z | yhifny |
1,689,852,872 | feat: 🎸 restore backfill job as a cron job at 12:00 every day | null | feat: 🎸 restore backfill job as a cron job at 12:00 every day: | closed | 2023-04-30T09:51:46Z | 2023-04-30T09:55:09Z | 2023-04-30T09:52:27Z | severo |
1,688,895,284 | Aggregated config and dataset level for opt-in-out-urls-scan | Currently we have Spawning URLs scan numbers, but only at the split level. This PR adds config- and dataset-level aggregation (only to summarize the numbers). | Aggregated config and dataset level for opt-in-out-urls-scan: Currently we have Spawning URLs scan numbers, but only at the split level. This PR adds config- and dataset-level aggregation (only to summarize the numbers). | closed | 2023-04-28T17:26:43Z | 2023-05-03T16:22:04Z | 2023-05-03T16:19:11Z | AndreaFrancis |
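As a rough illustration of the aggregation this PR describes, the config- and dataset-level responses could be built by summing the lower-level numbers; the helper below is a hypothetical sketch (the field names mirror the split-level response, everything else is assumed):
```python
from typing import List, TypedDict


class ScanCounts(TypedDict):
    num_opt_in_urls: int
    num_opt_out_urls: int
    num_urls: int
    num_scanned_rows: int


def aggregate_counts(children: List[ScanCounts]) -> ScanCounts:
    # Sum the lower-level numbers (splits for a config, configs for a dataset).
    return ScanCounts(
        num_opt_in_urls=sum(c["num_opt_in_urls"] for c in children),
        num_opt_out_urls=sum(c["num_opt_out_urls"] for c in children),
        num_urls=sum(c["num_urls"] for c in children),
        num_scanned_rows=sum(c["num_scanned_rows"] for c in children),
    )
```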
1,688,837,343 | Some Doc tweaks | null | Some Doc tweaks: | closed | 2023-04-28T16:35:08Z | 2023-04-28T18:22:33Z | 2023-04-28T18:19:44Z | julien-c |
1,688,782,163 | Delete `dataset-split-names-from-streaming` job runner | Part of https://github.com/huggingface/datasets-server/issues/1086 | Delete `dataset-split-names-from-streaming` job runner: Part of https://github.com/huggingface/datasets-server/issues/1086 | closed | 2023-04-28T15:50:17Z | 2023-05-03T12:12:51Z | 2023-05-03T12:10:03Z | polinaeterna |
1,688,555,379 | feat: 🎸 change TTL for finished jobs from 7 days to 1 day | We need to delete the index before mongoengine creates it again with a new value
See https://github.com/huggingface/datasets-server/issues/1104 | feat: 🎸 change TTL for finished jobs from 7 days to 1 day: We need to delete the index before mongoengine creates it again with a new value
See https://github.com/huggingface/datasets-server/issues/1104 | closed | 2023-04-28T13:24:00Z | 2023-04-28T13:34:01Z | 2023-04-28T13:31:15Z | severo |
1,688,416,623 | Delete finished jobs immediately? | Currently, finished jobs are deleted after 7 days by an index. See https://github.com/huggingface/datasets-server/blob/259fd092c12d240d9b8d733c965c4b9362e90684/libs/libcommon/src/libcommon/queue.py#L144
But we never use the finished jobs, so:
- we could delete them immediately after finishing
- we could reduce the duration from 7 days to 1 hour (can be complementary to the previous action, to clean uncaught jobs)
For point 2, see https://github.com/huggingface/datasets-server/pull/1103
Stats:
- 9.805.591 jobs
- 13.345 are not finished! (0.1% of the jobs) | Delete finished jobs immediately?: Currently, finished jobs are deleted after 7 days by an index. See https://github.com/huggingface/datasets-server/blob/259fd092c12d240d9b8d733c965c4b9362e90684/libs/libcommon/src/libcommon/queue.py#L144
But we never use the finished jobs, so:
- we could delete them immediately after finishing
- we could reduce the duration from 7 days to 1 hour (can be complementary to the previous action, to clean uncaught jobs)
For point 2, see https://github.com/huggingface/datasets-server/pull/1103
Stats:
- 9.805.591 jobs
- 13.345 are not finished! (0.1% of the jobs) | closed | 2023-04-28T11:49:10Z | 2023-05-31T12:20:38Z | 2023-05-31T12:20:38Z | severo |
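For context, the 7-day retention mentioned in this issue comes from a MongoDB TTL index on `finished_at`. A minimal mongoengine sketch of that kind of index is shown below; the field list, collection name and TTL value are simplified assumptions, not the actual Job document:
```python
from mongoengine import Document
from mongoengine.fields import DateTimeField, StringField


class Job(Document):
    # Simplified stand-in for the real Job document (not the actual field list).
    type = StringField(required=True)
    status = StringField(required=True)
    finished_at = DateTimeField()

    meta = {
        "collection": "jobs",  # assumed collection name
        "indexes": [
            # TTL index: MongoDB deletes a document this many seconds after finished_at.
            {"fields": ["finished_at"], "expireAfterSeconds": 24 * 3600},
        ],
    }
```
The first option above (delete immediately) would instead amount to deleting the document at the end of processing, keeping a TTL index only as a safety net for uncaught jobs.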
1,688,408,079 | fix: 🐛 revert to the previous value | When mongoengine viewed the new value (delete finished entries after 2 days instead of 7 days), it tried to create the index, but raised the following error:
```
pymongo.errors.OperationFailure: An equivalent index already exists with the same name but different options. Requested index: { v: 2, key: { finished_at: 1 }, name: "finished_at_1", background: false, expireAfterSeconds: 172800 }, existing index: { v: 2, key: { finished_at: 1 }, name: "finished_at_1", background: false, expireAfterSeconds: 604800 }, full error: {'ok': 0.0, 'errmsg': 'An equivalent index already exists with the same name but different options. Requested index: { v: 2, key: { finished_at: 1 }, name: "finished_at_1", background: false, expireAfterSeconds: 172800 }, existing index: { v: 2, key: { finished_at: 1 }, name: "finished_at_1", background: false, expireAfterSeconds: 604800 }', 'code': 85, 'codeName': 'IndexOptionsConflict'
```
Reverting for now. | fix: 🐛 revert to the previous value: When mongoengine viewed the new value (delete finished entries after 2 days instead of 7 days), it tried to create the index, but raised the following error:
```
pymongo.errors.OperationFailure: An equivalent index already exists with the same name but different options. Requested index: { v: 2, key: { finished_at: 1 }, name: "finished_at_1", background: false, expireAfterSeconds: 172800 }, existing index: { v: 2, key: { finished_at: 1 }, name: "finished_at_1", background: false, expireAfterSeconds: 604800 }, full error: {'ok': 0.0, 'errmsg': 'An equivalent index already exists with the same name but different options. Requested index: { v: 2, key: { finished_at: 1 }, name: "finished_at_1", background: false, expireAfterSeconds: 172800 }, existing index: { v: 2, key: { finished_at: 1 }, name: "finished_at_1", background: false, expireAfterSeconds: 604800 }', 'code': 85, 'codeName': 'IndexOptionsConflict'
```
Reverting for now. | closed | 2023-04-28T11:42:22Z | 2023-04-28T11:52:45Z | 2023-04-28T11:50:04Z | severo |
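A possible way around this conflict, sketched below with pymongo (connection string, database and collection names are assumptions): either change the TTL of the existing index in place with `collMod`, or drop the index so that mongoengine can recreate it with the new value at startup.
```python
from pymongo import MongoClient

# Hypothetical connection parameters; the real deployment reads them from the environment.
client = MongoClient("mongodb://localhost:27017")
db = client["queue"]

# Option A: change the TTL of the existing index in place (no rebuild needed).
db.command("collMod", "jobs", index={"name": "finished_at_1", "expireAfterSeconds": 2 * 24 * 3600})

# Option B: drop the index so that mongoengine recreates it with the new expireAfterSeconds.
# db["jobs"].drop_index("finished_at_1")
```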
1,688,393,430 | feat: 🎸 add logs to backfill job | Sorry... a lot of small PRs. | feat: 🎸 add logs to backfill job: Sorry... a lot of small PRs. | closed | 2023-04-28T11:29:56Z | 2023-04-28T11:35:08Z | 2023-04-28T11:31:58Z | severo |
1,688,385,552 | feat: 🎸 add debug logs to backfill job | null | feat: 🎸 add debug logs to backfill job: | closed | 2023-04-28T11:23:26Z | 2023-04-28T11:29:15Z | 2023-04-28T11:26:09Z | severo |
1,688,364,010 | feat: 🎸 add an index to the jobs collection | proposed by mongo cloud | feat: 🎸 add an index to the jobs collection: proposed by mongo cloud | closed | 2023-04-28T11:06:48Z | 2023-04-28T11:27:01Z | 2023-04-28T11:24:20Z | severo |
1,688,332,517 | Upgrade datasets to 2.12.0 | https://github.com/huggingface/datasets/releases/tag/2.12.0 | Upgrade datasets to 2.12.0: https://github.com/huggingface/datasets/releases/tag/2.12.0 | closed | 2023-04-28T10:43:52Z | 2023-05-10T07:40:22Z | 2023-05-10T07:40:22Z | severo |
1,688,330,009 | Add logs and move backfill job to one shot job | null | Add logs and move backfill job to one shot job: | closed | 2023-04-28T10:41:50Z | 2023-04-28T10:49:35Z | 2023-04-28T10:46:41Z | severo |
1,688,303,179 | fix: 🐛 report backfill every 100 analyzed datasets | Also: only run the backfill job every 6 hours | fix: 🐛 report backfill every 100 analyzed datasets: Also: only run the backfill job every 6 hours | closed | 2023-04-28T10:22:51Z | 2023-04-28T10:27:36Z | 2023-04-28T10:24:57Z | severo |
1,688,206,258 | fix: 🐛 add missing environment variables for migration job | null | fix: 🐛 add missing environment variables for migration job: | closed | 2023-04-28T09:17:55Z | 2023-04-28T09:22:02Z | 2023-04-28T09:18:35Z | severo |
1,687,799,915 | Dataset Viewer issue for cbt | ### Link
https://huggingface.co/datasets/cbt
### Description
The dataset viewer is not working for dataset cbt.
Error details:
```
Error code: ResponseAlreadyComputedError
```
| Dataset Viewer issue for cbt: ### Link
https://huggingface.co/datasets/cbt
### Description
The dataset viewer is not working for dataset cbt.
Error details:
```
Error code: ResponseAlreadyComputedError
```
| closed | 2023-04-28T02:35:39Z | 2023-05-02T08:06:37Z | 2023-05-02T08:06:37Z | lowestbuaaer |
1,686,999,687 | More tests on dataset state | - changes in the Graph specification:
- `provides_dataset_config_names` field indicates if the step is providing the list of config names for a dataset
- `provides_config_split_names` field indicates if the step is providing the list of split names for a config
- `requires` field has been renamed to `triggered_by`: indeed, the relation between steps is that, when step B is `triggered_by` step A, if A has been updated, a new job will be created for step B.
- ProcessingGraph:
- the concept of ProcessingStep is simplified: it's a data class, and provides the name, job runner version, input type (as well as the job type and cache kind, which are equal to the processing step name)
- most of the methods are provided by ProcessingGraph, in particular those related to the edges, or to filtering nodes.
- the data structure is a [networkx.digraph](https://networkx.org/documentation/stable/reference/classes/digraph.html): the nodes are the step names, and we use the [attributes](https://networkx.org/documentation/stable/tutorial.html#node-attributes) to store some properties
- remove the `/admin/jobs_duration` endpoint: the finished jobs are now deleted quickly, so this statistic no longer means much.
- add tests for state.py. In particular: use small processing graphs covering the special cases (children, grand-children, multiple parents, fan-in, fan-out, parallel steps, etc.). The "real" processing graph now only has one test. This should reduce the amount of code to change when a step is added, modified or deleted. | More tests on dataset state: - changes in the Graph specification:
- `provides_dataset_config_names` field indicates if the step is providing the list of config names for a dataset
- `provides_config_split_names` field indicates if the step is providing the list of split names for a config
- `requires` field has been renamed to `triggered_by`: indeed, the relation between steps is that, when step B is `triggered_by` step A, if A has been updated, a new job will be created for step B.
- ProcessingGraph:
- the concept of ProcessingStep is simplified: it's a data class, and provides the name, job runner version, input type (as well as the job type and cache kind, which are equal to the processing step name)
- most of the methods are provided by ProcessingGraph, in particular those related to the edges, or to filtering nodes.
- the data structure is a [networkx.digraph](https://networkx.org/documentation/stable/reference/classes/digraph.html): the nodes are the step names, and we use the [attributes](https://networkx.org/documentation/stable/tutorial.html#node-attributes) to store some properties
- remove the `/admin/jobs_duration` endpoint: the finished jobs are now deleted quickly, so this statistic no longer means much.
- add tests for state.py. In particular: use small processing graphs covering the special cases (children, grand-children, multiple parents, fan-in, fan-out, parallel steps, etc.). The "real" processing graph now only has one test. This should reduce the amount of code to change when a step is added, modified or deleted. | closed | 2023-04-27T14:36:59Z | 2023-05-10T12:54:23Z | 2023-05-10T12:54:22Z | severo |
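To make the networkx-based structure mentioned above concrete, here is a small self-contained sketch of a processing graph with node attributes and `triggered_by` edges; the step names and attributes are illustrative, not the real specification:
```python
import networkx as nx

# Toy specification in the spirit of the one described above.
specification = {
    "dataset-config-names": {"input_type": "dataset", "triggered_by": None},
    "config-split-names": {"input_type": "config", "triggered_by": "dataset-config-names"},
    "split-first-rows": {"input_type": "split", "triggered_by": "config-split-names"},
}

graph = nx.DiGraph()
for name, spec in specification.items():
    # Node attributes carry the step properties (here: only the input type).
    graph.add_node(name, input_type=spec["input_type"])
for name, spec in specification.items():
    if spec["triggered_by"]:
        # An edge A -> B means: when A is updated, a new job is created for B.
        graph.add_edge(spec["triggered_by"], name)

assert nx.is_directed_acyclic_graph(graph)
print(list(graph.successors("dataset-config-names")))  # ['config-split-names']
```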
1,686,654,329 | Add migration for metrics | Removes the metrics about /parquet-and-dataset-info
see https://github.com/huggingface/datasets-server/pull/1043#issuecomment-1523353409 | Add migration for metrics: Removes the metrics about /parquet-and-dataset-info
see https://github.com/huggingface/datasets-server/pull/1043#issuecomment-1523353409 | closed | 2023-04-27T11:27:11Z | 2023-04-27T13:22:20Z | 2023-04-27T13:18:45Z | severo |
1,686,522,757 | Add WORKER_JOB_TYPES_BLOCKED | Now, we can list:
- the job types that a worker cannot process: `WORKER_JOB_TYPES_BLOCKED`
- the job types that a worker can process: `WORKER_JOB_TYPES_ONLY`
| Add WORKER_JOB_TYPES_BLOCKED: Now, we can list:
- the job types that a worker cannot process: `WORKER_JOB_TYPES_BLOCKED`
- the job types that a worker can process: `WORKER_JOB_TYPES_ONLY`
| closed | 2023-04-27T09:59:32Z | 2023-04-27T12:22:29Z | 2023-04-27T12:19:12Z | severo |
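A minimal sketch of how a worker could combine the two variables; the comma-separated parsing is an assumption, not the actual configuration code:
```python
import os
from typing import List


def get_env_list(name: str) -> List[str]:
    # Comma-separated list in an environment variable; empty means "no constraint".
    value = os.environ.get(name, "")
    return [item.strip() for item in value.split(",") if item.strip()]


def can_process(job_type: str) -> bool:
    blocked = get_env_list("WORKER_JOB_TYPES_BLOCKED")
    only = get_env_list("WORKER_JOB_TYPES_ONLY")
    if job_type in blocked:
        return False
    return not only or job_type in only
```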
1,684,831,804 | Automatically collect all the migrations as manual listing of migration jobs in collector is not reliable | When creating a migration job, it should be first implemented in `jobs/mongodb_migration/src/mongodb_migration/migrations/` and then manually added to `jobs/mongodb_migration/src/mongodb_migration/collector.py` which is very easy to forget about (see https://github.com/huggingface/datasets-server/pull/1034, https://github.com/huggingface/datasets-server/pull/1043#discussion_r1177606263).
It would be good to build the migration collector automatically from all the migrations in the `mongodb_migration/migrations` directory.
| Automatically collect all the migrations as manual listing of migration jobs in collector is not reliable: When creating a migration job, it should be first implemented in `jobs/mongodb_migration/src/mongodb_migration/migrations/` and then manually added to `jobs/mongodb_migration/src/mongodb_migration/collector.py` which is very easy to forget about (see https://github.com/huggingface/datasets-server/pull/1034, https://github.com/huggingface/datasets-server/pull/1043#discussion_r1177606263).
It would be good to build the migration collector automatically from all the migrations in the `mongodb_migration/migrations` directory.
| closed | 2023-04-26T11:21:51Z | 2023-07-18T12:01:57Z | 2023-07-15T15:03:56Z | polinaeterna |
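One way to implement the automatic collection, sketched under the assumption that each migration lives in its own module of the `mongodb_migration.migrations` package (how each module exposes its migration object is left out):
```python
import importlib
import pkgutil
from types import ModuleType
from typing import List


def collect_migration_modules(package_name: str = "mongodb_migration.migrations") -> List[ModuleType]:
    # Import every module found in the migrations package instead of listing them by hand.
    package = importlib.import_module(package_name)
    return [
        importlib.import_module(f"{package_name}.{module_info.name}")
        for module_info in pkgutil.iter_modules(package.__path__)
    ]
```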
1,684,829,566 | feat: 🎸 use a common function to forge unique ids | Used for the Job unicity_id field, and for the artifact ids.
BREAKING CHANGE: 🧨 the job unicity_id has changed a bit
The only issue with the breaking change is that it could temporarily start the same job twice. Not a big issue, I think.
fixes #861 | feat: 🎸 use a common function to forge unique ids: Used for the Job unicity_id field, and for the artifact ids.
BREAKING CHANGE: 🧨 the job unicity_id has changed a bit
The only issue with the breaking change is that it could temporarily start the same job twice. Not a big issue, I think.
fixes #861 | closed | 2023-04-26T11:20:37Z | 2023-04-27T07:25:17Z | 2023-04-27T07:22:17Z | severo |
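For illustration, such a common function could simply join the defined parts of the job; the separator and parameter names below are assumptions, not the actual implementation:
```python
from typing import Optional


def get_unique_id(job_type: str, dataset: str, config: Optional[str] = None, split: Optional[str] = None) -> str:
    # Join the defined parts with a separator, usable both as the Job unicity_id and as an artifact id.
    return ",".join(part for part in (job_type, dataset, config, split) if part is not None)


# e.g. get_unique_id("split-first-rows-from-streaming", "user/dataset", "default", "train")
# -> 'split-first-rows-from-streaming,user/dataset,default,train'
```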
1,684,073,691 | Dataset Viewer issue for henri28/jason | ### Link
https://huggingface.co/datasets/henri28/jason
### Description
The dataset viewer is not working for dataset henri28/jason.
Error details:
```
Error code: ResponseAlreadyComputedError
```
| Dataset Viewer issue for henri28/jason: ### Link
https://huggingface.co/datasets/henri28/jason
### Description
The dataset viewer is not working for dataset henri28/jason.
Error details:
```
Error code: ResponseAlreadyComputedError
```
| closed | 2023-04-26T01:00:38Z | 2023-04-26T14:02:02Z | 2023-04-26T08:49:51Z | hnrqpmntl |
1,683,943,100 | Spawning full scan | Closes https://github.com/huggingface/datasets-server/issues/1087
Included changes:
- Separate the worker definition in the chart (prod and dev) into a dedicated one for Spawning, with the max long-running time set to 10 hrs
- Remove the max-rows scanning limitation
- Store in assets a CSV file with the list of opted-in/out URLs (headers: url, column, row_idx, opt_in, opt_out)
| Spawning full scan: Closes https://github.com/huggingface/datasets-server/issues/1087
Included changes:
- Separate the worker definition in the chart (prod and dev) into a dedicated one for Spawning, with the max long-running time set to 10 hrs
- Remove the max-rows scanning limitation
- Store in assets a CSV file with the list of opted-in/out URLs (headers: url, column, row_idx, opt_in, opt_out)
| closed | 2023-04-25T22:07:13Z | 2023-10-10T13:29:53Z | 2023-05-03T17:43:38Z | AndreaFrancis |
1,683,470,189 | Full Spawning urls scan | Currently the URLs scan is implemented for the first 100K rows [here ](https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/split/opt_in_out_urls_scan_from_streaming.py)
We need to do it for the full dataset, but there are some concerns:
- A full scan could take too much time for big datasets (like [laion](https://huggingface.co/datasets/laion/laion2B-en)), and we have a task configured [here](https://github.com/huggingface/datasets-server/blob/bdda4ed42c3f884995917ae6e7575ef67e338f48/services/worker/src/worker/executor.py#L121) that kills long-running jobs, which could prevent the full scan from finishing
- We have a response size validation [here](https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runner.py#L470) which can lead to a database error when the content to store is big (too many opt-in/out URLs)
Approach 1:
Run the full dataset scan in a single job
This will need the following actions:
- Implement custom timeout configurations to kill "long-running jobs", i.e. we could set WORKER_MAX_JOB_DURATION_SECONDS to 10 hrs for the split-opt-in-out-urls-scan job and keep 20 min for the other jobs (as it is currently)
- Store the content response (or maybe only the opt-in/out URLs list) in an external storage. It was suggested by @severo to store it in a parquet file (see thread https://github.com/huggingface/datasets-server/pull/1044#discussion_r1173625223). I think it could be a good idea, but we will need to decide where to store the file (Maybe https://datasets-server.huggingface.co/assets? Another branch in each repository, like we currently do for parquet files https://huggingface.co/datasets/<dataset>/tree/refs%2Fconvert%2Fparquet? )
- In case we don't yet need to show the list of opt-in/out URLs, we can skip the previous step and only store the number of URLs; see the sample: https://datasets-server.huggingface.co/opt-in-out-urls-scan?dataset=laion/laion2B-en&config=laion--laion2B-en&split=train
<div class="toolbar"><div class="devtools-separator"></div><div class="devtools-searchbox"><input class="searchBox devtools-filterinput" placeholder="Filter JSON" value=""></div></div><div class="panelContent" id="json-scrolling-panel" tabindex="0">
<div class="toolbar"><div class="devtools-separator"></div><div class="devtools-searchbox"><input class="searchBox devtools-filterinput" placeholder="Filter JSON" value=""></div></div><div class="panelContent" id="json-scrolling-panel" tabindex="0">
key | value
-- | --
urls_columns | […]
~~opt_in_urls~~ | []
~~opt_out_urls~~ | […]
num_opt_in_urls | 0
num_opt_out_urls | 1853
num_urls | 100000
num_scanned_rows | 100000
has_urls_columns | true
Approach 2:
Run the full scan in separate jobs and store the results in separate cache entries
This will need the following actions:
- Add support in the current datasets-server architecture for another level of granularity https://github.com/huggingface/datasets-server/blob/main/libs/libcommon/src/libcommon/processing_graph.py#L12, maybe "batch", which would have a start_row and an end_row, or an offset and a limit. We would scan the dataset in batches and store the result of each batch in its own cache entry. At the end, an aggregated "dataset" job would give us the summarized result of the previous responses, e.g.:
**Batch one: offset: 0, limit: 1000**
<div class="toolbar"><div class="devtools-separator"></div><div class="devtools-searchbox"><input class="searchBox devtools-filterinput" placeholder="Filter JSON" value=""></div></div><div class="panelContent" id="json-scrolling-panel" tabindex="0">
<div class="toolbar"><div class="devtools-separator"></div><div class="devtools-searchbox"><input class="searchBox devtools-filterinput" placeholder="Filter JSON" value=""></div></div><div class="panelContent" id="json-scrolling-panel" tabindex="0">
key | value
-- | --
urls_columns | […]
~~opt_in_urls~~ | []
~~opt_out_urls~~ | […]
num_opt_in_urls | 0
num_opt_out_urls | 10
num_urls | 1000
limit | 1000
offset | 0
**Batch two: offset: 1000, limit: 1000**
<div class="toolbar"><div class="devtools-separator"></div><div class="devtools-searchbox"><input class="searchBox devtools-filterinput" placeholder="Filter JSON" value=""></div></div><div class="panelContent" id="json-scrolling-panel" tabindex="0">
<div class="toolbar"><div class="devtools-separator"></div><div class="devtools-searchbox"><input class="searchBox devtools-filterinput" placeholder="Filter JSON" value=""></div></div><div class="panelContent" id="json-scrolling-panel" tabindex="0">
key | value
-- | --
urls_columns | […]
~~opt_in_urls~~ | []
~~opt_out_urls~~ | […]
num_opt_in_urls | 4
num_opt_out_urls | 5
num_urls | 50
limit | 1000
offset | 1000
**Aggregated response:**
<div class="toolbar"><div class="devtools-separator"></div><div class="devtools-searchbox"><input class="searchBox devtools-filterinput" placeholder="Filter JSON" value=""></div></div><div class="panelContent" id="json-scrolling-panel" tabindex="0">
<div class="toolbar"><div class="devtools-separator"></div><div class="devtools-searchbox"><input class="searchBox devtools-filterinput" placeholder="Filter JSON" value=""></div></div><div class="panelContent" id="json-scrolling-panel" tabindex="0">
key | value
-- | --
urls_columns | […]
~~opt_in_urls~~ | []
~~opt_out_urls~~ | […]
num_opt_in_urls | 4 (0 + 4)
num_opt_out_urls | 15 (10 + 5)
num_urls | 1050 = (1000 from batch one and 50 from batch two)
- The only problem with this approach is that we could still sometimes hit the DB storage limit (e.g. only a few opt-in/out URLs but really long text)
| Full Spawning urls scan: Currently the URLs scan is implemented for the first 100K rows [here ](https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/split/opt_in_out_urls_scan_from_streaming.py)
We need to do it for the full dataset, but there are some concerns:
- A full scan could take too much time for big datasets (like [laion](https://huggingface.co/datasets/laion/laion2B-en)), and we have a task configured [here](https://github.com/huggingface/datasets-server/blob/bdda4ed42c3f884995917ae6e7575ef67e338f48/services/worker/src/worker/executor.py#L121) that kills long-running jobs, which could prevent the full scan from finishing
- We have a response size validation [here](https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runner.py#L470) which can lead to a database error when the content to store is big (too many opt-in/out URLs)
Approach 1:
Run the full dataset scan in a single job
This will need the following actions:
- Implement custom timeout configurations to kill "long-running jobs", i.e. we could set WORKER_MAX_JOB_DURATION_SECONDS to 10 hrs for the split-opt-in-out-urls-scan job and keep 20 min for the other jobs (as it is currently)
- Store the content response (or maybe only the opt-in/out URLs list) in an external storage. It was suggested by @severo to store it in a parquet file (see thread https://github.com/huggingface/datasets-server/pull/1044#discussion_r1173625223). I think it could be a good idea, but we will need to decide where to store the file (Maybe https://datasets-server.huggingface.co/assets? Another branch in each repository, like we currently do for parquet files https://huggingface.co/datasets/<dataset>/tree/refs%2Fconvert%2Fparquet? )
- In case we don't yet need to show the list of opt-in/out URLs, we can skip the previous step and only store the number of URLs; see the sample: https://datasets-server.huggingface.co/opt-in-out-urls-scan?dataset=laion/laion2B-en&config=laion--laion2B-en&split=train
<div class="toolbar"><div class="devtools-separator"></div><div class="devtools-searchbox"><input class="searchBox devtools-filterinput" placeholder="Filter JSON" value=""></div></div><div class="panelContent" id="json-scrolling-panel" tabindex="0">
<div class="toolbar"><div class="devtools-separator"></div><div class="devtools-searchbox"><input class="searchBox devtools-filterinput" placeholder="Filter JSON" value=""></div></div><div class="panelContent" id="json-scrolling-panel" tabindex="0">
key | value
-- | --
urls_columns | […]
~~opt_in_urls~~ | []
~~opt_out_urls~~ | […]
num_opt_in_urls | 0
num_opt_out_urls | 1853
num_urls | 100000
num_scanned_rows | 100000
has_urls_columns | true
Approach 2:
Run the full scan in separate jobs and store the results in separate cache entries
This will need the following actions:
- Add support in the current datasets-server architecture for another level of granularity https://github.com/huggingface/datasets-server/blob/main/libs/libcommon/src/libcommon/processing_graph.py#L12, maybe "batch", which would have a start_row and an end_row, or an offset and a limit. We would scan the dataset in batches and store the result of each batch in its own cache entry. At the end, an aggregated "dataset" job would give us the summarized result of the previous responses, e.g.:
**Batch one: offset: 0, limit: 1000**
<div class="toolbar"><div class="devtools-separator"></div><div class="devtools-searchbox"><input class="searchBox devtools-filterinput" placeholder="Filter JSON" value=""></div></div><div class="panelContent" id="json-scrolling-panel" tabindex="0">
<div class="toolbar"><div class="devtools-separator"></div><div class="devtools-searchbox"><input class="searchBox devtools-filterinput" placeholder="Filter JSON" value=""></div></div><div class="panelContent" id="json-scrolling-panel" tabindex="0">
key | value
-- | --
urls_columns | […]
~~opt_in_urls~~ | []
~~opt_out_urls~~ | […]
num_opt_in_urls | 0
num_opt_out_urls | 10
num_urls | 1000
limit | 1000
offset | 0
**Batch two: offset: 1000, limit: 1000**
<div class="toolbar"><div class="devtools-separator"></div><div class="devtools-searchbox"><input class="searchBox devtools-filterinput" placeholder="Filter JSON" value=""></div></div><div class="panelContent" id="json-scrolling-panel" tabindex="0">
<div class="toolbar"><div class="devtools-separator"></div><div class="devtools-searchbox"><input class="searchBox devtools-filterinput" placeholder="Filter JSON" value=""></div></div><div class="panelContent" id="json-scrolling-panel" tabindex="0">
key | value
-- | --
urls_columns | […]
~~opt_in_urls~~ | []
~~opt_out_urls~~ | […]
num_opt_in_urls | 4
num_opt_out_urls | 5
num_urls | 50
limit | 1000
offset | 1000
**Aggregated response:**
<div class="toolbar"><div class="devtools-separator"></div><div class="devtools-searchbox"><input class="searchBox devtools-filterinput" placeholder="Filter JSON" value=""></div></div><div class="panelContent" id="json-scrolling-panel" tabindex="0">
<div class="toolbar"><div class="devtools-separator"></div><div class="devtools-searchbox"><input class="searchBox devtools-filterinput" placeholder="Filter JSON" value=""></div></div><div class="panelContent" id="json-scrolling-panel" tabindex="0">
key | value
-- | --
urls_columns | […]
~~opt_in_urls~~ | []
~~opt_out_urls~~ | […]
num_opt_in_urls | 4 (0 + 4)
num_opt_out_urls | 15 (10 + 5)
num_urls | 1050 = (1000 from batch one and 50 from batch two)
- The only problem with this approach is that we could still sometimes hit the DB storage limit (e.g. only a few opt-in/out URLs but really long text)
| closed | 2023-04-25T16:11:45Z | 2024-06-19T14:09:59Z | 2024-06-19T14:09:59Z | AndreaFrancis |
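To make Approach 2 above more concrete, here is a hypothetical sketch of the batching and of the final aggregation; the batch size, field names and function names are assumptions:
```python
from typing import Dict, Iterator, List, Tuple


def iter_batches(num_rows: int, batch_size: int = 1000) -> Iterator[Tuple[int, int]]:
    # Yield (offset, limit) pairs covering the whole split, one job per batch.
    for offset in range(0, num_rows, batch_size):
        yield offset, min(batch_size, num_rows - offset)


def aggregate(batches: List[Dict[str, int]]) -> Dict[str, int]:
    # Summarize the per-batch cache entries into a single dataset-level response.
    return {
        "num_opt_in_urls": sum(b["num_opt_in_urls"] for b in batches),
        "num_opt_out_urls": sum(b["num_opt_out_urls"] for b in batches),
        "num_urls": sum(b["num_urls"] for b in batches),
    }


# e.g. list(iter_batches(1050)) -> [(0, 1000), (1000, 50)], matching the two batches in the example above.
```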
1,683,360,970 | What's left in refactoring of processing steps and endpoints | Here I want to summarize once again the final changes needed to make datasets-server consistent in naming and logic, see https://github.com/huggingface/datasets-server/issues/867 (because it's all scattered across comments on different issues)
## Rename steps
- [x] `/split-names-from-streaming` -> `config-split-names-from-streaming` https://github.com/huggingface/datasets-server/pull/1168
- [x] `/split-names-from-dataset-info` -> `config-split-names-from-info` https://github.com/huggingface/datasets-server/pull/1226
- [x] `/config-names` -> `dataset-config-names` https://github.com/huggingface/datasets-server/pull/1246
## Delete steps
- [x] `parquet-and-dataset-info` https://github.com/huggingface/datasets-server/pull/1043
- [x] `dataset-split-names-from-streaming` https://github.com/huggingface/datasets-server/pull/1106
- [x] `dataset-split-names-from-dataset-info` https://github.com/huggingface/datasets-server/pull/1141
## Endpoints
- [x] `/dataset-info` -> `/info` https://github.com/huggingface/datasets-server/pull/1468
- [x] delete `/parquet-and-dataset-info` https://github.com/huggingface/datasets-server/pull/1488
- [x] delete `/config-names`
## Misc
- [x] update the docs
- [x] update the OpenAPI spec. | What's left in refactoring of processing steps and endpoints: Here I want to summarize once again the final changes needed to make datasets-server consistent in naming and logic, see https://github.com/huggingface/datasets-server/issues/867 (because it's all scattered across comments on different issues)
## Rename steps
- [x] `/split-names-from-streaming` -> `config-split-names-from-streaming` https://github.com/huggingface/datasets-server/pull/1168
- [x] `/split-names-from-dataset-info` -> `config-split-names-from-info` https://github.com/huggingface/datasets-server/pull/1226
- [x] `/config-names` -> `dataset-config-names` https://github.com/huggingface/datasets-server/pull/1246
## Delete steps
- [x] `parquet-and-dataset-info` https://github.com/huggingface/datasets-server/pull/1043
- [x] `dataset-split-names-from-streaming` https://github.com/huggingface/datasets-server/pull/1106
- [x] `dataset-split-names-from-dataset-info` https://github.com/huggingface/datasets-server/pull/1141
## Endpoints
- [x] `/dataset-info` -> `/info` https://github.com/huggingface/datasets-server/pull/1468
- [x] delete `/parquet-and-dataset-info` https://github.com/huggingface/datasets-server/pull/1488
- [x] delete `/config-names`
## Misc
- [x] update the docs
- [x] update the OpenAPI spec. | closed | 2023-04-25T15:08:39Z | 2023-07-31T20:54:25Z | 2023-07-31T20:54:25Z | polinaeterna |
1,683,152,586 | CI spawning check error for PR from fork: The provided API token is invalid | See: https://github.com/huggingface/datasets-server/actions/runs/4796071992/jobs/8531405657?pr=1084#logs
```
______________________ test_real_check_spawning_response _______________________
app_config = AppConfig(assets=AssetsConfig(base_url='http://localhost/assets', storage_directory=None), cache=CacheConfig(mongo_dat...per_batch=1000, spawning_token='dummy_spawning_token', max_concurrent_requests_number=100, max_requests_per_second=50))
@pytest.mark.asyncio
async def test_real_check_spawning_response(app_config: AppConfig) -> None:
semaphore = Semaphore(value=10)
limiter = AsyncLimiter(10, time_period=1)
headers = {"Authorization": f"API {CI_SPAWNING_TOKEN}"}
async with ClientSession(headers=headers) as session:
image_url = "http://testurl.test/test_image.jpg"
image_urls = [image_url]
spawning_url = app_config.urls_scan.spawning_url
spawning_response = await check_spawning(image_urls, session, semaphore, limiter, spawning_url)
assert spawning_response and isinstance(spawning_response, dict)
> assert spawning_response["urls"] and isinstance(spawning_response["urls"], list)
E KeyError: 'urls'
tests/job_runners/split/test_opt_in_out_urls_scan_from_streaming.py:309: KeyError
```
where `spawning_response` is:
```
{'detail': 'The provided API token is invalid.'}
``` | CI spawning check error for PR from fork: The provided API token is invalid: See: https://github.com/huggingface/datasets-server/actions/runs/4796071992/jobs/8531405657?pr=1084#logs
```
______________________ test_real_check_spawning_response _______________________
app_config = AppConfig(assets=AssetsConfig(base_url='http://localhost/assets', storage_directory=None), cache=CacheConfig(mongo_dat...per_batch=1000, spawning_token='dummy_spawning_token', max_concurrent_requests_number=100, max_requests_per_second=50))
@pytest.mark.asyncio
async def test_real_check_spawning_response(app_config: AppConfig) -> None:
semaphore = Semaphore(value=10)
limiter = AsyncLimiter(10, time_period=1)
headers = {"Authorization": f"API {CI_SPAWNING_TOKEN}"}
async with ClientSession(headers=headers) as session:
image_url = "http://testurl.test/test_image.jpg"
image_urls = [image_url]
spawning_url = app_config.urls_scan.spawning_url
spawning_response = await check_spawning(image_urls, session, semaphore, limiter, spawning_url)
assert spawning_response and isinstance(spawning_response, dict)
> assert spawning_response["urls"] and isinstance(spawning_response["urls"], list)
E KeyError: 'urls'
tests/job_runners/split/test_opt_in_out_urls_scan_from_streaming.py:309: KeyError
```
where `spawning_response` is:
```
{'detail': 'The provided API token is invalid.'}
``` | open | 2023-04-25T13:13:17Z | 2023-08-22T15:16:55Z | null | albertvillanova |
1,682,823,079 | Raise informative error when importing non-installed module | With this PR, we raise a specific informative error when a dataset script tries to import a module/library that is not installed.
Note that once we propagate error messages from the children jobs, we only need this message in the root job "/config-names".
Fix #1083. | Raise informative error when importing non-installed module: With this PR, we raise a specific informative error when a dataset script tries to import a module/library that is not installed.
Note that once we propagate error messages from the children jobs, we only need this message in the root job "/config-names".
Fix #1083. | closed | 2023-04-25T09:47:56Z | 2023-04-26T13:37:11Z | 2023-04-26T13:33:52Z | albertvillanova |
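The general shape of such a check, sketched here with a hypothetical error class and wrapper (the real PR may structure it differently):
```python
from datasets import get_dataset_config_names


class DatasetModuleNotInstalledError(Exception):
    """Raised when a dataset script imports a package that is not installed on the worker."""


def get_config_names_or_raise(dataset: str):
    try:
        return get_dataset_config_names(path=dataset)
    except ImportError as err:
        # Re-raise with a message the user can act on, instead of a generic UnexpectedError.
        raise DatasetModuleNotInstalledError(
            f"The {dataset} dataset tries to import a module that is not installed."
        ) from err
```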
1,682,814,798 | No specific error when dataset tries to import a non-installed module | When a dataset script tries to import a module/library that is not installed, there is no informative error message.
See:
- #1067
- #1068
Related to:
- #976 | No specific error when dataset tries to import a non-installed module: When a dataset script tries to import a module/library that is not installed, there is no informative error message.
See:
- #1067
- #1068
Related to:
- #976 | closed | 2023-04-25T09:42:38Z | 2023-04-26T13:33:54Z | 2023-04-26T13:33:54Z | albertvillanova |
1,681,666,869 | Dataset Viewer issue for masakhane/afriqa | ### Link
https://huggingface.co/datasets/masakhane/afriqa
### Description
The dataset viewer is not working for dataset masakhane/afriqa.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for masakhane/afriqa: ### Link
https://huggingface.co/datasets/masakhane/afriqa
### Description
The dataset viewer is not working for dataset masakhane/afriqa.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-04-24T16:44:20Z | 2023-04-25T06:18:18Z | 2023-04-25T06:18:17Z | ToluClassics |
1,678,997,509 | Fix key name for split names | Split steps were not visible on dataset state | Fix key name for split names: Split steps were not visible on dataset state | closed | 2023-04-21T19:36:03Z | 2023-04-21T19:53:05Z | 2023-04-21T19:50:13Z | AndreaFrancis |
1,678,926,187 | Dataset Viewer issue for casey-martin/oa_cpp_annotate | ### Link
https://huggingface.co/datasets/casey-martin/oa_cpp_annotate
### Description
The dataset viewer is not working for dataset casey-martin/oa_cpp_annotate.
Error details:
```
Error code: ResponseNotReady
```
Do you have any recommendations for trouble-shooting or steps I could take to enable the data preview? Thanks. | Dataset Viewer issue for casey-martin/oa_cpp_annotate: ### Link
https://huggingface.co/datasets/casey-martin/oa_cpp_annotate
### Description
The dataset viewer is not working for dataset casey-martin/oa_cpp_annotate.
Error details:
```
Error code: ResponseNotReady
```
Do you have any recommendations for trouble-shooting or steps I could take to enable the data preview? Thanks. | closed | 2023-04-21T18:27:30Z | 2023-05-10T05:21:55Z | 2023-05-10T05:21:55Z | casey-martin |
1,678,760,013 | Params validation to job_runner | Closes https://github.com/huggingface/datasets-server/issues/1074 | Params validation to job_runner: Closes https://github.com/huggingface/datasets-server/issues/1074 | closed | 2023-04-21T16:06:14Z | 2023-05-04T20:59:35Z | 2023-05-04T20:59:29Z | AndreaFrancis |
1,678,682,542 | fix: 🐛 remove the wrong concept of blocked_by_parent | the relation created by "requires" means that the step will be refreshed when the required step is refreshed (should we change the name?). | fix: 🐛 remove the wrong concept of blocked_by_parent: the relation created by "requires" means that the step will be refreshed when the required step is refreshed (should we change the name?). | closed | 2023-04-21T15:12:14Z | 2023-04-21T15:29:09Z | 2023-04-21T15:25:55Z | severo |
1,678,584,203 | Update backfill job, and set up a cronjob in prod | null | Update backfill job, and set up a cronjob in prod: | closed | 2023-04-21T14:04:20Z | 2023-04-28T09:01:27Z | 2023-04-28T08:58:11Z | severo |
1,678,464,303 | test: 💍 make the e2e tests on API clearer | no need to test every endpoint on every auth case | test: 💍 make the e2e tests on API clearer: no need to test every endpoint on every auth case | closed | 2023-04-21T12:44:10Z | 2023-04-21T14:50:51Z | 2023-04-21T14:47:47Z | severo |
1,678,401,615 | refactor: 💡 create the configs at startup | instead of creating a FirstRowsConfig every time | refactor: 💡 create the configs at startup: instead of creating a FirstRowsConfig every time | closed | 2023-04-21T11:57:26Z | 2023-04-21T12:28:17Z | 2023-04-21T12:25:29Z | severo |
1,678,399,900 | Move dataset, config and split validation to job_runner instead of compute method | We validate the parameters `dataset, config and split` at each job runner level here:
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/config_names.py#L109
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/config/parquet.py#L93
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/config/parquet_and_info.py#L934
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/config/size.py#L162
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/config/split_names_from_dataset_info.py#L108
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/config/split_names_from_streaming.py#L128
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/dataset/parquet.py#L141
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/dataset/is_valid.py#L60
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/dataset/size.py#L169
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/dataset/split_names.py#L152
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/dataset/split_names_from_dataset_info.py#L137
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/dataset/split_names_from_streaming.py#L153
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/split/first_rows_from_parquet.py#L315
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/split/first_rows_from_streaming.py#L490
But we could do it once, before calling the `compute` method https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runner.py#L466
Based on `self.processing_step.input`, we could perform the corresponding validation and raise the error before `compute`.
The change should also be applied to the job runners not listed here (probably new ones added after this issue was created) | Move dataset, config and split validation to job_runner instead of compute method: We validate the parameters `dataset, config and split` at each job runner level here:
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/config_names.py#L109
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/config/parquet.py#L93
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/config/parquet_and_info.py#L934
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/config/size.py#L162
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/config/split_names_from_dataset_info.py#L108
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/config/split_names_from_streaming.py#L128
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/dataset/parquet.py#L141
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/dataset/is_valid.py#L60
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/dataset/size.py#L169
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/dataset/split_names.py#L152
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/dataset/split_names_from_dataset_info.py#L137
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/dataset/split_names_from_streaming.py#L153
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/split/first_rows_from_parquet.py#L315
- https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/split/first_rows_from_streaming.py#L490
But we could do it once, before calling the `compute` method https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runner.py#L466
Based on `self.processing_step.input`, we could perform the corresponding validation and raise the error before `compute`.
The change should also be applied to the job runners not listed here (probably new ones added after this issue was created) | closed | 2023-04-21T11:56:09Z | 2023-05-11T16:04:41Z | 2023-05-11T16:04:41Z | AndreaFrancis |
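A sketch of what the shared validation could look like, based on the processing step's input type; the error class and function names are hypothetical:
```python
from typing import Literal, Optional

InputType = Literal["dataset", "config", "split"]


class ParameterMissingError(Exception):
    pass


def validate_parameters(
    input_type: InputType, dataset: Optional[str], config: Optional[str], split: Optional[str]
) -> None:
    # Run once in the generic job runner, before calling compute(), instead of in every runner.
    if not dataset:
        raise ParameterMissingError("The 'dataset' parameter is required.")
    if input_type in ("config", "split") and not config:
        raise ParameterMissingError("The 'config' parameter is required.")
    if input_type == "split" and not split:
        raise ParameterMissingError("The 'split' parameter is required.")
```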
1,678,219,765 | Dataset Viewer issue for nyuuzyou/AnimeHeadsv3 | ### Link
https://huggingface.co/datasets/nyuuzyou/AnimeHeadsv3
### Description
The dataset viewer is not working for dataset nyuuzyou/AnimeHeadsv3.
Error details:
```
Error code: ResponseNotReady
```
The dataset status has been showing as "ResponseNotReady" for the past six days, and sometimes it briefly changes to "ResponseNotFound". I am not sure if this is a bug or if there is an issue with the dataset itself. | Dataset Viewer issue for nyuuzyou/AnimeHeadsv3: ### Link
https://huggingface.co/datasets/nyuuzyou/AnimeHeadsv3
### Description
The dataset viewer is not working for dataset nyuuzyou/AnimeHeadsv3.
Error details:
```
Error code: ResponseNotReady
```
The dataset status has been showing as "ResponseNotReady" for the past six days, and sometimes it briefly changes to "ResponseNotFound". I am not sure if this is a bug or if there is an issue with the dataset itself. | closed | 2023-04-21T09:40:12Z | 2023-04-21T15:13:51Z | 2023-04-21T15:13:50Z | nyuuzyou |
1,678,101,307 | Avoid disk storage issues | The assets, cached-assets (for /rows) and datasets library cache are stored on a disk (might be the same or not). When the total used size reaches a threshold, new jobs are not run (and the job queue increases).
But we don't check the inodes usage, which can also lead to "no space left" issues (occurred on 2023/04/21, see https://github.com/huggingface/datasets-server/issues/1071).
Some ideas we could implement.
Specific to inodes:
- add a metric about the inode usage (and add a graph in Grafana)
- add a check on the inode usage before launching a new job
General:
- add alerts when disk related metrics reach a threshold
- have a periodic cleaning of the disk. Note that the disk was mostly filled by the datasets library cache of jobs that crashed (otherwise, the temporary directory is deleted at the end of the job). The directory names contain the date, so we can delete all the directories older than one week, for example.
- use different disks for assets, cached-assets and cache (datasets library), and monitor them separately (and check them separately before launching a job)
- use S3, instead of a disk?
- move disk metrics to the metrics collector job, instead of checking on every call to /metrics
- add a check on the disk usage in cached-assets before storing new assets when /rows is called (even if we already have a mechanism to reduce the size) | Avoid disk storage issues: The assets, cached-assets (for /rows) and datasets library cache are stored on a disk (might be the same or not). When the total used size reaches a threshold, new jobs are not run (and the job queue increases).
But we don't check the inodes usage, which can also lead to "no space left" issues (occurred on 2023/04/21, see https://github.com/huggingface/datasets-server/issues/1071).
Some ideas we could implement.
Specific to inodes:
- add a metric about the inodes usage (and add a graph in grafana)
- add a test on the inodes usage before launching a new job
General:
- add alerts when disk related metrics reach a threshold
- have a periodic cleaning of the disk. Note that the disk was mostly filled by the datasets library cache of jobs that crashed (when a job finishes normally, its temporary directory is deleted at the end). The directory names contain the date, so we could, for example, delete all directories older than one week.
- use different disks for assets, cached-assets and cache (datasets library), and monitor them separately (and check them separately before launching a job)
- use S3, instead of a disk?
- move disk metrics to the metrics collector job, instead of checking on every call to /metrics
- add a check on the disk usage in cached-assets before storing new assets when /rows is called (even if we already have a mechanism to reduce the size) | closed | 2023-04-21T08:22:20Z | 2023-09-15T07:58:32Z | 2023-09-15T07:58:32Z | severo |
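As a rough illustration of two of the ideas listed above (checking inode usage before launching a job, and periodically deleting cache directories older than one week), here is a minimal Python sketch. It assumes a Unix filesystem; the cache path and the 90% threshold are placeholders, not the actual datasets-server configuration.

```python
# Minimal sketch (placeholder paths/thresholds): inode-usage check plus
# cleanup of first-level cache directories older than one week.
import os
import shutil
import time
from pathlib import Path

ONE_WEEK_SECONDS = 7 * 24 * 3600


def inodes_used_ratio(path: str) -> float:
    """Fraction of inodes in use on the filesystem holding `path` (Unix only)."""
    stats = os.statvfs(path)
    if stats.f_files == 0:  # some filesystems do not report inode counts
        return 0.0
    return 1.0 - stats.f_ffree / stats.f_files


def delete_old_cache_dirs(cache_root: str, max_age_seconds: int = ONE_WEEK_SECONDS) -> None:
    """Remove first-level directories not modified for `max_age_seconds`."""
    now = time.time()
    for entry in Path(cache_root).iterdir():
        if entry.is_dir() and now - entry.stat().st_mtime > max_age_seconds:
            shutil.rmtree(entry, ignore_errors=True)


cache_root = "/storage/datasets-cache"  # placeholder path
if Path(cache_root).exists() and inodes_used_ratio(cache_root) > 0.9:
    # e.g. refuse to start a new job, and reclaim space left by crashed jobs
    delete_old_cache_dirs(cache_root)
```

The same ratio could also be exported as a metric and alerted on, which would cover the monitoring ideas above without changing the job logic.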
1,677,827,927 | Dataset Viewer issue for michaelwzhu/ChatMed-Datasets | ### Link
https://huggingface.co/datasets/michaelwzhu/ChatMed-Datasets
### Description
The dataset viewer is not working for dataset michaelwzhu/ChatMed-Datasets.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for michaelwzhu/ChatMed-Datasets: ### Link
https://huggingface.co/datasets/michaelwzhu/ChatMed-Datasets
### Description
The dataset viewer is not working for dataset michaelwzhu/ChatMed-Datasets.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-04-21T05:18:45Z | 2023-04-21T08:08:30Z | 2023-04-21T08:08:30Z | michael-wzhu |
1,676,967,488 | refactor: 💡 change name of parameter to be more precise | null | refactor: 💡 change name of parameter to be more precise: | closed | 2023-04-20T15:33:18Z | 2023-04-20T19:22:18Z | 2023-04-20T19:18:41Z | severo |
1,676,811,102 | feat: 🎸 create children jobs even in case of error | fixes #949, along with #1066 | feat: 🎸 create children jobs even in case of error: fixes #949, along with #1066 | closed | 2023-04-20T14:10:28Z | 2023-04-28T09:01:00Z | 2023-04-28T08:58:01Z | severo |
1,676,773,466 | Dataset Viewer issue for InstaDeepAI/multi_species_genomes | ### Link
https://huggingface.co/datasets/InstaDeepAI/multi_species_genomes
### Description
The dataset viewer is not working for dataset InstaDeepAI/multi_species_genomes.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for InstaDeepAI/multi_species_genomes: ### Link
https://huggingface.co/datasets/InstaDeepAI/multi_species_genomes
### Description
The dataset viewer is not working for dataset InstaDeepAI/multi_species_genomes.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-04-20T13:51:28Z | 2023-08-07T16:36:21Z | 2023-08-06T15:04:00Z | dallatt |
1,676,772,925 | Dataset Viewer issue for InstaDeepAI/human_reference_genome | ### Link
https://huggingface.co/datasets/InstaDeepAI/human_reference_genome
### Description
The dataset viewer is not working for dataset InstaDeepAI/human_reference_genome.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for InstaDeepAI/human_reference_genome: ### Link
https://huggingface.co/datasets/InstaDeepAI/human_reference_genome
### Description
The dataset viewer is not working for dataset InstaDeepAI/human_reference_genome.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-04-20T13:51:12Z | 2024-02-02T17:14:09Z | 2024-02-02T17:14:08Z | dallatt |