E621 2024 tags only in 10k tar
A dedicated dataset to align both NebulaeWis/e621-2024-webp-4Mpixel and deepghs/e621_newest-webp-4Mpixel.
How to use / why I created this: my speedrun to build the dataset
How to build the "dataset" fast
Get at least 4TB of storage and around 75GB of RAM. Always make a fresh venv / conda environment for each task.
(Optional) Download these directly: posts-2024-04-07.parquet and table.parquet
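If you just want to peek at those two parquet files first, a quick pandas check works (needs pandas + pyarrow; the exact column layout depends on the dump, so print it rather than assuming):

```python
# Quick sanity check of the two parquet files.
# Column names depend on the actual dump, so inspect them before relying on any.
import pandas as pd

posts = pd.read_parquet("posts-2024-04-07.parquet")
table = pd.read_parquet("table.parquet")

print(posts.shape, table.shape)
print(posts.columns.tolist())
```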
Download all 10k tarfiles with webp images via dl-e621-hfhub-nw.py and dl-e621-hfhub-dgs.py
Rerun that script for this repo (another 10k tarfiles); a rough equivalent is sketched below.
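I don't reproduce dl-e621-hfhub-nw.py / dl-e621-hfhub-dgs.py here, but a minimal stand-in built on huggingface_hub's snapshot_download would look roughly like this (the tar pattern, local path, and worker count are assumptions to adjust):

```python
# Minimal sketch: pull the tar shards of one source repo in parallel.
# pip install huggingface_hub
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="NebulaeWis/e621-2024-webp-4Mpixel",  # repeat for the deepghs repo
    repo_type="dataset",
    allow_patterns=["*.tar"],                     # only the tar shards
    local_dir="H:/e621-2024-webp-4Mpixel",        # adjust to your drive
    max_workers=8,
)
```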
(Optional) Otherwise, build this dataset yourself via metadata-e621-tags-parallel.py (the core idea is sketched below).
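I won't paste metadata-e621-tags-parallel.py in full; the rough idea, assuming it reduces the posts dump to a tags-only table keyed by post id (the real script parallelizes this), is:

```python
# Rough idea only: boil the posts parquet down to a tags-only table.
# "id" / "tag_string" follow the e621 DB export naming and may need adjusting.
import pandas as pd

posts = pd.read_parquet("posts-2024-04-07.parquet")
posts[["id", "tag_string"]].to_parquet("table.parquet", index=False)
```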
Run extract-e621-parallel.py to extract all tars into a single directory.
> python extract-e621-parallel.py
100%|██████████████████████████████████████| 1000/1000 [2:30:51<00:00, 9.05s/it]
Extracted: 1000 iters
Delta: 744438 files
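A stripped-down version of what that parallel extraction does (paths and worker count are placeholders, not the script's actual values):

```python
# Sketch of extract-e621-parallel.py: unpack every tar shard into one
# flat directory with a process pool. Paths and worker count are placeholders.
import tarfile
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

from tqdm import tqdm

SRC = Path("H:/e621_newest-webp-4Mpixel")                  # downloaded tars
DST = Path("H:/e621_newest-webp-4Mpixel/khoyas_finetune")  # flat output dir

def extract_one(tar_path: Path) -> int:
    with tarfile.open(tar_path) as tf:
        tf.extractall(DST)
        return len(tf.getnames())

if __name__ == "__main__":
    DST.mkdir(parents=True, exist_ok=True)
    tars = sorted(SRC.glob("*.tar"))
    with ProcessPoolExecutor(max_workers=8) as pool:
        counts = list(tqdm(pool.map(extract_one, tars), total=len(tars)))
    print(f"Extracted {sum(counts)} files from {len(tars)} tars")
```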
PS H:\e621_newest-webp-4Mpixel> node
Welcome to Node.js v20.15.0.
Type ".help" for more information.
> const fs = require('fs');
> console.log(fs.readdirSync("./khoyas_finetune").length);
8883320
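Explorer and a plain dir listing struggle with a folder holding ~8.8M files; the Node one-liner above works, and an equivalent check in Python is:

```python
# Count entries in the extraction folder without building one huge list in memory.
import os

with os.scandir("./khoyas_finetune") as it:
    print(sum(1 for entry in it if entry.is_file()))
```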
(Done?) Finally, instead of the official guide (a bit messy), follow this Reddit post to make the metadata JSON file (with aspect ratio bucketing, ARB) and start finetuning.
(Optional) I added meta_cap_dd.json so you can skip the preprocessing. It should be enough to start finetuning directly.
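For reference, meta_cap_dd.json follows kohya-ss's fine-tuning metadata format: one JSON object keyed by image name, each entry carrying a comma-separated "tags" string (and optionally a "caption"). A tiny hand-made example (keys and tag strings here are invented, not taken from the real file):

```python
# Illustrative only: the rough shape of a kohya-style metadata JSON.
# Keys and tag strings below are invented.
import json

meta = {
    "1234567": {"tags": "solo, outdoors, sitting"},
    "7654321": {"tags": "duo, indoors, standing", "caption": "two characters indoors"},
}
with open("meta_example.json", "w", encoding="utf-8") as f:
    json.dump(meta, f, indent=2)
```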
(In progress) You can use my training guide as a reference. Note: this is just the simplest possible approach; good results require doing your own research (DYOR).
python sdxl_train.py `
  --pretrained_model_name_or_path="path_to_sdxl_base_model" `
  --in_json="meta_cap_dd.json" `
  --train_data_dir="folder_extracted_dataset" `
  --output_dir="model_out_folder"