The dataset viewer is not available for this dataset.
Cannot get the config names for the dataset.
Error code:   ConfigNamesError
Exception:    ValueError
Message:      
Expected data_files in YAML to be either a string or a list of strings
or a list of dicts with two keys: 'split' and 'path', but got [{'split': 'train', 'path': ['data/ArXiv/train/*.parquet', 'data/BookCorpus2/train/*.parquet', 'data/Books3/train/*.arrow', 'data/DM Mathematics/train/*.parquet', 'data/Enron Emails/train/*.parquet', 'data/EuroParl/train/*.parquet', 'data/FreeLaw/train/*.parquet', 'data/Github/train/*.parquet', 'data/Gutenberg (PG-19)/train/*.parquet', 'data/HackerNews/train/*.parquet', 'data/NIH ExPorter/train/*.parquet', 'data/OpenSubtitles/train/*.parquet', 'data/OpenWebText2/train/*.parquet', 'data/PhilPapers/train/*.parquet', 'data/Pile-CC/train/*.parquet', 'data/PubMed Abstracts/train/*.parquet', 'data/PubMed Central/train/*.parquet', 'data/StackExchange/train/*.parquet', 'data/UPSTO Backgrounds/train/*.parquet', 'data/Ubuntu IRC/train/*.parquet', 'data/Wikipedia (en)/train/*.parquet', 'data/YoutubeSubtitles/train/*.parquet'], 'default': True}]
Examples of data_files in YAML:

   data_files: data.csv

   data_files: data/*.png

   data_files:
    - part0/*
    - part1/*

   data_files:
    - split: train
      path: train/*
    - split: test
      path: test/*

   data_files:
    - split: train
      path:
      - train/part1/*
      - train/part2/*
    - split: test
      path: test/*

Traceback:    Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 55, in compute_config_names_response
                  for config in sorted(get_dataset_config_names(path=dataset, token=hf_token))
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names
                  dataset_module = dataset_module_factory(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1512, in dataset_module_factory
                  raise e1 from None
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1489, in dataset_module_factory
                  return HubDatasetModuleFactoryWithoutScript(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1038, in get_module
                  metadata_configs = MetadataConfigs.from_dataset_card_data(dataset_card_data)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/metadata.py", line 184, in from_dataset_card_data
                  cls._raise_if_data_files_field_not_valid(metadata_config)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/metadata.py", line 170, in _raise_if_data_files_field_not_valid
                  raise ValueError(yaml_error_message)
              ValueError: 
              Expected data_files in YAML to be either a string or a list of strings
              or a list of dicts with two keys: 'split' and 'path', but got [{'split': 'train', 'path': ['data/ArXiv/train/*.parquet', 'data/BookCorpus2/train/*.parquet', 'data/Books3/train/*.arrow', 'data/DM Mathematics/train/*.parquet', 'data/Enron Emails/train/*.parquet', 'data/EuroParl/train/*.parquet', 'data/FreeLaw/train/*.parquet', 'data/Github/train/*.parquet', 'data/Gutenberg (PG-19)/train/*.parquet', 'data/HackerNews/train/*.parquet', 'data/NIH ExPorter/train/*.parquet', 'data/OpenSubtitles/train/*.parquet', 'data/OpenWebText2/train/*.parquet', 'data/PhilPapers/train/*.parquet', 'data/Pile-CC/train/*.parquet', 'data/PubMed Abstracts/train/*.parquet', 'data/PubMed Central/train/*.parquet', 'data/StackExchange/train/*.parquet', 'data/UPSTO Backgrounds/train/*.parquet', 'data/Ubuntu IRC/train/*.parquet', 'data/Wikipedia (en)/train/*.parquet', 'data/YoutubeSubtitles/train/*.parquet'], 'default': True}]

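The root cause is the stray 'default': True key inside the data_files entry: the validator accepts only 'split' and 'path' there, and the flag was most likely mis-indented so that it attached to the train item instead of to the config. Below is a minimal sketch of a corrected configs block for the README.md YAML header, assuming a single configuration; the config_name value is illustrative, and the default flag can simply be dropped when only one config exists:

   configs:
   - config_name: default
     default: true
     data_files:
     - split: train
       path:
       - data/ArXiv/train/*.parquet
       - data/BookCorpus2/train/*.parquet
       # ... the remaining per-subset globs from the message above, unchanged ...
       - data/YoutubeSubtitles/train/*.parquet

Each item under data_files then carries only the split and path keys shown in the examples above, which is what the check in datasets/utils/metadata.py enforces. Once the corrected README.md is pushed, the viewer re-runs get_dataset_config_names and the ConfigNamesError should no longer be raised.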