The dataset viewer is not available for this subset.
Cannot get the split names for the config 'default' of the dataset.

Exception: SplitsNotFoundError
Message: The split names could not be parsed from the dataset config.
Traceback:

```
Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/mongoengine/queryset/base.py", line 269, in get
    result = next(queryset)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/mongoengine/queryset/base.py", line 1608, in __next__
    raw_doc = next(self._cursor)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/pymongo/cursor.py", line 1267, in next
    raise StopIteration
StopIteration

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/libs/libcommon/src/libcommon/simple_cache.py", line 539, in get_response_with_details
    CachedResponseDocument.objects(kind=kind, dataset=dataset, config=config, split=split)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/mongoengine/queryset/base.py", line 272, in get
    raise queryset._document.DoesNotExist(msg)
libcommon.simple_cache.DoesNotExist: CachedResponseDocument matching query does not exist.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/split_names.py", line 159, in compute
    compute_split_names_from_info_response(
  File "/src/services/worker/src/worker/job_runners/config/split_names.py", line 131, in compute_split_names_from_info_response
    config_info_response = get_previous_step_or_raise(kind="config-info", dataset=dataset, config=config)
  File "/src/libs/libcommon/src/libcommon/simple_cache.py", line 588, in get_previous_step_or_raise
    response = get_response_with_details(kind=kind, dataset=dataset, config=config, split=split)
  File "/src/libs/libcommon/src/libcommon/simple_cache.py", line 552, in get_response_with_details
    raise CachedArtifactNotFoundError(kind=kind, dataset=dataset, config=config, split=split) from e
libcommon.simple_cache.CachedArtifactNotFoundError: Cache entry does not exist: kind='config-info' dataset='bigdata-pw/civitai' config='default' split=None

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 499, in get_dataset_config_info
    for split_generator in builder._split_generators(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/webdataset/webdataset.py", line 99, in _split_generators
    inferred_arrow_schema = pa.concat_tables(pa_tables, promote_options="default").schema
  File "pyarrow/table.pxi", line 5317, in pyarrow.lib.concat_tables
  File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowTypeError: struct fields don't match or are in the wrong order: Input fields: struct<Denoising strength: string, Hires prompt: struct<dvr-cl-dvsn: string>, Hires steps: string, Hires upscale: string, Hires upscaler: string, Model: string, Model hash: string, Old prompt editing timelines: string, Size: string, Version: string, cfgScale: int64, clipSkip: int64, hashes: struct<lora:dvr-cl-dvsn: string, model: string>, negativePrompt: string, prompt: string, resources: list<item: struct<hash: string, name: string, type: string, weight: double>>, sampler: string, seed: int64, steps: int64> output fields: struct<Denoising strength: string, Hires prompt: struct<dvr-cl-dvsn: string>, Hires steps: string, Hires upscale: string, Hires upscaler: string, Model: string, Model hash: string, Old prompt editing timelines: string, Size: string, Version: string, cfgScale: int64, clipSkip: int64, hashes: struct<lora:dvr-cl-dvsn: string, model: string>, negativePrompt: string, prompt: string, resources: list<item: struct<hash: string, name: string, type: string, weight: double>>, sampler: string, seed: int64, steps: int64, Created Date: string, civitaiResources: list<item: struct<modelVersionId: int64, modelVersionName: string, type: string, weight: int64>>>

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/split_names.py", line 75, in compute_split_names_from_streaming_response
    for split in get_dataset_split_names(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 572, in get_dataset_split_names
    info = get_dataset_config_info(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 504, in get_dataset_config_info
    raise SplitsNotFoundError("The split names could not be parsed from the dataset config.") from err
datasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.
```
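
The root cause is the final ArrowTypeError: the webdataset builder infers an Arrow schema from the first samples, and samples whose meta JSON carries extra fields (here Created Date and civitaiResources) cannot be concatenated with it. A minimal repro of that failure, with illustrative field names rather than the real ones:

```python
# pa.concat_tables with promote_options="default" does not unify struct
# columns whose inner fields differ, so mixed meta schemas fail to merge.
import pyarrow as pa

t1 = pa.table({"meta": [{"prompt": "a cat"}]})
t2 = pa.table({"meta": [{"prompt": "a dog", "seed": 42}]})  # extra field

pa.concat_tables([t1, t2], promote_options="default")  # raises ArrowTypeError
```

Reading the shards directly (see the Formats section below) sidesteps this schema inference entirely.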


Civitai Images

Images+metadata from Civitai

Stats:

  • ~4.1M images

Formats:

  • WebDataset
    • 10k images per shard, ~2GB per shard
    • jpg + json
    • __key__ is the Civitai image id
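
Because the viewer's schema inference fails, the shards can be read directly with the webdataset library instead. A minimal sketch; the shard filename below is an assumption, so check the repository's file listing for the real naming pattern:

```python
# Stream one shard over HTTP with the webdataset library.
import webdataset as wds

# Hypothetical shard name; check the repo for the actual files.
url = "https://huggingface.co/datasets/bigdata-pw/civitai/resolve/main/00000.tar"

dataset = (
    wds.WebDataset(url)
    .decode("pil")                       # decode the jpg payload to a PIL image
    .to_tuple("__key__", "jpg", "json")  # key = Civitai image id
)

for key, image, record in dataset:
    print(key, image.size, list(record)[:5])  # peek at the first json keys
    break
```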

Notes

  • ~464k images (~10% of those collected) have no meta field and are excluded
  • Files for some entries are actually videos; these will be released separately
  • Civitai extracts metadata on upload, so the exact fields in meta depend on the UI used; some fields are common (e.g. prompt), others are UI specific. This varying schema is what breaks the dataset viewer above (see the sketch after this list)
  • Includes reaction data
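
Since the fields inside meta vary with the UI that produced the image, it pays to read them defensively. A minimal sketch; the field names below are taken from the error schema above, but any of them may be absent from a given record:

```python
# Pull common meta fields with .get() so records from any UI parse cleanly;
# everything unrecognized is kept under "extra".
COMMON = {"prompt", "negativePrompt", "seed", "steps", "cfgScale", "Model"}

def parse_meta(meta: dict) -> dict:
    return {
        "prompt": meta.get("prompt"),
        "negative_prompt": meta.get("negativePrompt"),
        "seed": meta.get("seed"),
        "steps": meta.get("steps"),
        "cfg_scale": meta.get("cfgScale"),
        "model": meta.get("Model"),
        "extra": {k: v for k, v in meta.items() if k not in COMMON},
    }

# Usage with the loader above, assuming the json payload stores the
# Civitai metadata under a "meta" key (per the notes):
#   info = parse_meta(record.get("meta") or {})
```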

another BIG data banger straight from the underground

with thanks to Civitai and their community ❤️
