E621 2024 tags only in 10k tar

How to build the "dataset" quickly

> python extract-e621-parallel.py
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1000/1000 [2:30:51<00:00,  9.05s/it]
Extracted: 1000 iters
Delta: 744438 files
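
The repository's `extract-e621-parallel.py` is not reproduced here, but a minimal sketch of the same idea is below: extract the tars concurrently with a process pool and report the totals. The `*.tar` glob and the `khoyas_finetune` output directory are assumptions inferred from the transcripts above, so adjust paths to your layout.

```python
# Sketch of parallel tar extraction (not the actual extract-e621-parallel.py;
# the *.tar glob and the khoyas_finetune output dir are assumptions).
import tarfile
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

from tqdm import tqdm

OUT_DIR = Path("khoyas_finetune")

def extract_one(tar_path: Path) -> int:
    """Extract one tar into OUT_DIR and return how many members it held."""
    with tarfile.open(tar_path) as tf:
        members = tf.getmembers()
        tf.extractall(OUT_DIR)
    return len(members)

if __name__ == "__main__":
    OUT_DIR.mkdir(exist_ok=True)
    tars = sorted(Path(".").glob("*.tar"))
    with ProcessPoolExecutor() as pool:
        counts = list(tqdm(pool.map(extract_one, tars), total=len(tars)))
    print(f"Extracted: {len(tars)} iters")
    print(f"Delta: {sum(counts)} files")
```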
PS H:\e621_newest-webp-4Mpixel> node
Welcome to Node.js v20.15.0.
Type ".help" for more information.
> const fs = require('fs');
> console.log(fs.readdirSync("./khoyas_finetune").length);
8883320
  • (Done?) Finally, instead of the (somewhat messy) official guide, follow this Reddit post to make the metadata JSON file (with ARB, aspect-ratio bucketing) and start finetuning; a rough sketch of the metadata format follows this list.

  • (Optional) Added meta_cap_dd.json so you can skip the preprocessing. It should be enough to start finetuning directly.

  • (In progress) You can use my training guide as a reference. Note: it covers only the simplest approach; good results require DYOR.
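
For orientation only: kohya-style finetuning metadata is a single JSON object keyed by image path, with per-image caption and/or tags. The sketch below assembles such a file from per-image `.txt` tag files; the `IMAGE_ID.txt` sidecar layout and the exact schema of `meta_cap_dd.json` are assumptions, so inspect the shipped file (or the Reddit post) for the real format.

```python
# Sketch: assemble a kohya-style metadata JSON from per-image .txt tag files.
# The IMAGE_ID.txt sidecar layout and the exact meta_cap_dd.json schema are
# assumptions -- inspect the shipped meta_cap_dd.json to confirm.
import json
from pathlib import Path

DATA_DIR = Path("khoyas_finetune")

metadata = {}
for txt in DATA_DIR.glob("*.txt"):
    # One entry per image, keyed by path without extension.
    metadata[str(DATA_DIR / txt.stem)] = {
        "tags": txt.read_text(encoding="utf-8").strip(),
    }

with open("meta_cap_dd.json", "w", encoding="utf-8") as f:
    json.dump(metadata, f, ensure_ascii=False)

print(f"Wrote {len(metadata)} entries")
```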

python sdxl_train.py `
    --in_json "meta_cap_dd.json" `
    --train_data_dir="folder_extracted_dataset" `
    --output_dir="model_out_folder"