
This is a reproduction of the CC-stories dataset, which has been removed from its original source. To create this reproduction, we process the English Common Crawl and keep only the top 0.1% of documents as measured by their n-gram overlap with a source document. The source document is created by joining the queries from PDP-60 and WSC273. Note that, since the original dataset does not mention removing duplicate queries, we do not remove them either.
After this filtering, we keep only those top documents, producing a dataset of 2,105,303 lines and 153,176,685 words.
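
For illustration, here is a minimal sketch of the overlap filtering described above. It assumes word-level trigram overlap as the similarity measure; the scoring function and the `queries_pdp60`, `queries_wsc273`, and `crawl_documents` names are hypothetical placeholders, not part of the original pipeline.

```python
from collections import Counter

def ngrams(tokens, n=3):
    """Yield word-level n-grams from a token list."""
    return zip(*(tokens[i:] for i in range(n)))

def overlap_score(doc_text, source_ngrams, n=3):
    """Fraction of the document's n-grams that also appear in the source document."""
    doc_ngrams = Counter(ngrams(doc_text.split(), n))
    total = sum(doc_ngrams.values())
    if total == 0:
        return 0.0
    matched = sum(count for gram, count in doc_ngrams.items() if gram in source_ngrams)
    return matched / total

# Build the "source document" by joining the PDP-60 and WSC273 queries.
# queries_pdp60 and queries_wsc273 are hypothetical lists of query strings.
source_text = " ".join(queries_pdp60 + queries_wsc273)
source_ngrams = set(ngrams(source_text.split(), 3))

# Score every Common Crawl document (crawl_documents is a hypothetical
# iterable of document strings) and keep the top 0.1% by overlap score.
scores = [(overlap_score(doc, source_ngrams), doc) for doc in crawl_documents]
scores.sort(key=lambda pair: pair[0], reverse=True)
kept = [doc for _, doc in scores[: max(1, len(scores) // 1000)]]
```

The exact n-gram size and scoring details of the original filtering are not specified here, so this sketch should be read as one plausible instantiation rather than the pipeline actually used.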