
Kaori-34B-v1 Model Card

This model was fine-tuned by Kaeri and Jenti.

Datasets Strategy

  • Open-Platypus
  • Dolphin

We trained the model on 100% of the Open-Platypus data and a 5% sample of the Dolphin data, using a supervised fine-tuning (SFT) strategy.
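The exact sampling procedure for the data mix is not published; a minimal sketch of one way to combine all of Open-Platypus with a 5% sample of Dolphin (function name, seed, and shuffling are assumptions):

```python
import random

def build_mix(platypus, dolphin, dolphin_frac=0.05, seed=42):
    # Hypothetical sketch: keep all Open-Platypus examples and add a
    # random 5% sample of Dolphin, then shuffle the combined set.
    rng = random.Random(seed)
    k = int(len(dolphin) * dolphin_frac)
    mixed = platypus + rng.sample(dolphin, k)
    rng.shuffle(mixed)
    return mixed
```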

We did not use GSM8K samples when generating data. We also guarded against data contamination by similarity-filtering any training example that matched one of the following tasks.

filtering_tasks = [
    'cot_gsm8k',
    'cot_gsm8k_ii',
    'drop:2.0.0',
    'winogrande:1.1.0',
    'task228_arc_answer_generation_easy',
    'ai2_arc/ARC-Challenge:1.0.0',
    'ai2_arc/ARC-Easy:1.0.0',
    'task229_arc_answer_generation_hard',
    'hellaswag:1.1.0',
    'task1389_hellaswag_completion'
]
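The similarity-filtering implementation is not published; a minimal sketch of one way to do it, using stdlib `difflib` string similarity (the function names and the 0.8 threshold are assumptions):

```python
from difflib import SequenceMatcher

def is_contaminated(example: str, benchmark_prompts: list[str],
                    threshold: float = 0.8) -> bool:
    # Hypothetical check: flag an example if it is too similar to any
    # prompt from the filtered benchmark tasks.
    return any(
        SequenceMatcher(None, example.lower(), prompt.lower()).ratio() >= threshold
        for prompt in benchmark_prompts
    )

def similarity_filter(examples, benchmark_prompts, threshold=0.8):
    # Keep only examples that do not match any benchmark prompt.
    return [ex for ex in examples
            if not is_contaminated(ex, benchmark_prompts, threshold)]
```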

Framework:

Parameters:

  • Finetune type: LoRA
  • GPUs: 4× A100 (80GB)
  • Epochs: 3
  • Batch size: 8
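The LoRA rank, scaling, and target modules are not stated; as a minimal sketch of the LoRA idea itself, the update trains a low-rank delta B @ A on top of a frozen weight W (all shapes and the rank here are illustrative, not the model's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 4                              # hidden size and LoRA rank (illustrative)
W = rng.standard_normal((d, d))           # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01    # trainable down-projection
B = np.zeros((d, r))                      # trainable up-projection, zero-initialized

def lora_forward(x: np.ndarray) -> np.ndarray:
    # Effective weight is W + B @ A; only A and B receive gradients.
    return x @ (W + B @ A).T

x = rng.standard_normal(d)
# With B zero-initialized, LoRA output equals the base model's output.
assert np.allclose(lora_forward(x), x @ W.T)
```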
Model size: 34.4B parameters (Safetensors, FP16)