Kaori-34b-v2 Model Card
This model was fine-tuned by Kaeri and Jenti.
Datasets
- Open-Platypus
- Dolphin
- OpenOrca
We trained the model on 100% of the Open-Platypus data, 5% of the Dolphin data, and 10% of the OpenOrca data, and applied a supervised fine-tuning (SFT) strategy.
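As a rough illustration, a mixture like this could be assembled with the Hugging Face `datasets` library. The repository ids, column names, and seed below are assumptions for the sketch, not a record of the actual pipeline:

```python
# Illustrative sketch only: repository ids, column names, and seed are
# assumptions, not a record of the actual training pipeline.
from datasets import load_dataset, concatenate_datasets

platypus = load_dataset("garage-bAInd/Open-Platypus", split="train")
orca = load_dataset("Open-Orca/OpenOrca", split="train")

# Normalize both sources to a shared (prompt, response) schema so they
# can be concatenated.
platypus = platypus.map(
    lambda ex: {"prompt": ex["instruction"], "response": ex["output"]},
    remove_columns=platypus.column_names,
)
orca = orca.map(
    lambda ex: {"prompt": ex["question"], "response": ex["response"]},
    remove_columns=orca.column_names,
)

# Keep all of Open-Platypus but only 10% of OpenOrca; a 5% Dolphin
# subset would be drawn the same way.
orca = orca.shuffle(seed=42).select(range(int(0.10 * len(orca))))

mixture = concatenate_datasets([platypus, orca]).shuffle(seed=42)
```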
We did not use GSM8k samples when generating the data. We also guarded against data contamination by similarity-filtering the training data, removing any example that matched one of the following tasks (a filtering sketch follows the list):
```python
filtering_tasks = [
    'cot_gsm8k',
    'cot_gsm8k_ii',
    'drop:2.0.0',
    'winogrande:1.1.0',
    'task228_arc_answer_generation_easy',
    'ai2_arc/ARC-Challenge:1.0.0',
    'ai2_arc/ARC-Easy:1.0.0',
    'task229_arc_answer_generation_hard',
    'hellaswag:1.1.0',
    'task1389_hellaswag_completion',
]
```
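One way such similarity filtering can work is fuzzy near-duplicate matching between training samples and held-out benchmark items. The sketch below uses Python's standard-library `difflib` on toy data; the samples and the 0.8 cutoff are illustrative assumptions, not the actual decontamination code:

```python
from difflib import SequenceMatcher

# Toy stand-ins: in practice these would be the SFT samples and the
# prompts drawn from the benchmark tasks listed above.
training_samples = [
    {"prompt": "Natalia sold clips to 48 of her friends in April."},
    {"prompt": "Explain the difference between a list and a tuple in Python."},
]
benchmark_prompts = [
    "Natalia sold clips to 48 of her friends in April.",
]

def is_contaminated(text, references, threshold=0.8):
    """Return True if `text` is a near-duplicate of any benchmark item."""
    return any(
        SequenceMatcher(None, text, ref).ratio() >= threshold
        for ref in references
    )

clean = [
    s for s in training_samples
    if not is_contaminated(s["prompt"], benchmark_prompts)
]
print(len(clean))  # -> 1: the GSM8k-like sample is dropped
```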
Framework:
Parameters:
- Fine-tune type: LoRA (see the configuration sketch after this list)
- GPUs: 4× A100 (80GB)
- Epochs: 3
- Batch size: 8
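For reference, a minimal LoRA setup with the `peft` library might look like the following. The base checkpoint, rank, alpha, dropout, and target modules are illustrative assumptions; the model card does not specify them:

```python
# Sketch only: the base checkpoint, rank, alpha, dropout, and target
# modules are assumptions; the model card does not state them.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("01-ai/Yi-34B")  # assumed base

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights train
```

Because LoRA freezes the base weights and trains only low-rank adapter matrices, the trainable parameter count stays small enough for a 34B model to be fine-tuned on four 80GB A100s.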