Gen Settings & Prompting

https://rentry.org/tsukasamodel
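
The linked rentry is the authoritative source for recommended generation settings and prompting. Since the model was tuned on metharme-format completions (see Training below), a minimal sketch of assembling a Metharme-style prompt follows, assuming the standard `<|system|>`/`<|user|>`/`<|model|>` control tokens; the persona/system text here is a placeholder, not a recommendation from the card.

```python
def build_metharme_prompt(system: str, history: list[tuple[str, str]], user_msg: str) -> str:
    """Assemble a Metharme-style prompt that ends on <|model|> so the
    model generates the next in-character reply."""
    prompt = f"<|system|>{system}"
    for prev_user, prev_model in history:
        prompt += f"<|user|>{prev_user}<|model|>{prev_model}"
    prompt += f"<|user|>{user_msg}<|model|>"
    return prompt


if __name__ == "__main__":
    # Placeholder system text; see the rentry above for actual recommendations.
    print(build_metharme_prompt(
        system="Enter roleplay mode. You are a helpful roleplay partner.",
        history=[],
        user_msg="Hello!",
    ))
```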

GPTQ

Quantized with a sequence length of 2048, using wikitext as the calibration dataset.
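
For reference, a minimal sketch of how such a quantization could be reproduced with AutoGPTQ, assuming wikitext-2 as the calibration set and the 2048-token sequence length stated above; the source checkpoint name, bit width, and group size below are assumptions not stated on this card.

```python
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from datasets import load_dataset
from transformers import AutoTokenizer

SOURCE = "ludis/tsukasa-120b-qlora"   # assumed full-precision source checkpoint
SEQ_LEN = 2048                        # sequence length stated on the card

tokenizer = AutoTokenizer.from_pretrained(SOURCE, use_fast=True)

# Calibration samples drawn from wikitext, truncated to the 2048-token limit.
wikitext = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
texts = [t for t in wikitext["text"] if t.strip()][:128]
examples = [tokenizer(t, truncation=True, max_length=SEQ_LEN) for t in texts]

# 4-bit weights with group size 128 are common GPTQ defaults, assumed here.
quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)

model = AutoGPTQForCausalLM.from_pretrained(SOURCE, quantize_config)
model.quantize(examples)
model.save_quantized("tsukasa-120b-gptq")
```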

Training

axolotl was used for training on an 8x NVIDIA A100 GPU cluster.

The A100 GPU cluster was graciously provided by lloorree.

Rank-8 QLoRA tune across all modules (an equivalent adapter configuration is sketched after the training steps below).

The base model, alpindale/goliath-120b, was tuned on koishi commit 6e675d1 for one epoch,

then on PIPPA commit 6412b0c for one epoch (metharme completion format),

then on LimaRP version 2023-10-19 (without the ponyville, lolicit, all the fallen, and eka's portal subsets) for two epochs, also in metharme completion format.
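
For reference, a minimal sketch of a rank-8 QLoRA setup in PEFT that approximates the adapter configuration above; the actual run used axolotl (configured via YAML), and the alpha, dropout, and exact target-module list here are assumptions not stated on the card.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# QLoRA: load the base model in 4-bit NF4 and train low-rank adapters on top.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "alpindale/goliath-120b",
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

lora_config = LoraConfig(
    r=8,                  # rank 8, per the card
    lora_alpha=16,        # assumption
    lora_dropout=0.05,    # assumption
    bias="none",
    task_type="CAUSAL_LM",
    # "All modules" interpreted as the Llama-style attention and MLP projections.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```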
