Hello, sorry for this question. hakurei-san already answered on Hugging Face about the full/fp32/fp16 models (their outputs still differ slightly from each other), but what does full-opt mean?
The full-opt model has all of the optimizer weights from training; it's meant for training purposes only. More documentation will be released on the 8th of October.
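For illustration: a "full-opt" checkpoint is typically just the inference checkpoint plus extra entries holding the optimizer's per-parameter state (e.g. Adam's two moment tensors). A minimal sketch of the difference, using a toy dict in place of a real file; the key names (`state_dict`, `optimizer_states`) follow the common PyTorch Lightning convention and are an assumption here, as real checkpoints may name things differently:

```python
# Toy "full-opt" checkpoint: model weights plus Adam optimizer moments.
# Key names ("state_dict", "optimizer_states") are assumed from the
# PyTorch Lightning convention; actual checkpoint layouts can differ.
full_ckpt = {
    "state_dict": {"layer.weight": [0.0, 0.0, 0.0, 0.0]},
    "optimizer_states": [
        # Adam keeps two moment tensors per parameter, which is why
        # full-opt files are much larger than inference-only ones.
        {"exp_avg": [0.0] * 4, "exp_avg_sq": [0.0] * 4},
    ],
}

# Inference only needs the model weights, so a pruned checkpoint keeps
# just "state_dict" and drops the optimizer state entirely.
pruned_ckpt = {"state_dict": full_ckpt["state_dict"]}

print("optimizer_states" in pruned_ckpt)
print(list(pruned_ckpt["state_dict"].keys()))
```

Since the model weights themselves are identical, pruning the optimizer state changes the file size but not the images the model produces.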
Does the full-opt model have better outputs than the normal ones used for inference?
Are these optimizer weights useful for concept training?