I ran this script:
# overwrite each file (a Git LFS pointer) with just the hash from its "oid" line, then let jdupes compare
find . -type f | while read -r a; do grep oid "$a" | cut -d" " -f2 | sponge "$a"; done
jdupes -r .
which told me that these files are identical:
./kl-f8-anime2.vae.pt
./vaedelicatecolors_v10.pt
./babes_11.vae.pt
./vae-ft-mse-840000-ema-pruned.ckpt
./Anything-V3.0.vae.pt
./Counterfeit-V2.5.vae.pt
./NovelAI-vae via hf.co_Reviem/diffusion_pytorch_model.bin
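For context on why that works: with a sparse/LFS checkout the files on disk are just Git LFS pointer files, and the oid line holds the sha256 of the real blob, so once the script strips each file down to that hash, jdupes only has to compare a few bytes per file instead of gigabytes of weights. The hash and size below are placeholders, but the pointer format itself comes from the LFS spec:

$ cat kl-f8-anime2.vae.pt    # before running the script
version https://git-lfs.github.com/spec/v1
oid sha256:<64 hex digits identifying the actual weights>
size <size of the actual weights in bytes>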
If you're storing on cloud storage, you can deduplicate with shortcuts (Google Drive) or notes; on Linux you can use hard links (rm vaedelicatecolors_v10.pt && ln kl-f8-anime2.vae.pt vaedelicatecolors_v10.pt).
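A minimal sketch of the hard-link route, assuming both files live on the same filesystem (hard links can't cross filesystems); afterwards ls -li should report the same inode number for both names, i.e. one copy of the data with two directory entries:

rm vaedelicatecolors_v10.pt
ln kl-f8-anime2.vae.pt vaedelicatecolors_v10.pt
ls -li kl-f8-anime2.vae.pt vaedelicatecolors_v10.pt    # same inode = same bytes on disk

(jdupes can also hard-link every duplicate set for you in one pass; -L, if memory serves, but check its man page.)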
If you're storing in a git repo, you can just use a sparse checkout like I do; the LFS backend will dedupe server-side, because that's how it works, at least on HF.
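A rough sketch of that workflow; the repo URL and paths are placeholders, and GIT_LFS_SKIP_SMUDGE is what keeps the big blobs as small pointer files until you explicitly pull the ones you want:

GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/<user>/<repo>
cd <repo>
git sparse-checkout set <subdir-you-actually-use>    # only materialize part of the tree
git lfs pull --include "kl-f8-anime2.vae.pt"         # fetch just the blobs you need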