
PlatYi-34B-200K-Q

Model Details

Model Developers: Kyujin Han (kyujinpy)

Input: Models input text only.

Output: Models generate text only.

Model Architecture
PlatYi-34B-200K-Q is an auto-regressive language model based on the Yi-34B transformer architecture.

Blog Link
Blog: [Coming soon...]
Github: [Coming soon...]

Base Model
01-ai/Yi-34B-200K

Training Dataset
garage-bAInd/Open-Platypus.
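
For reference, the dataset can be pulled straight from the Hub with the datasets library; a minimal sketch:

```python
# Inspect the Open-Platypus training data.
from datasets import load_dataset

platypus = load_dataset("garage-bAInd/Open-Platypus", split="train")
print(platypus)      # row count and column names
print(platypus[0])   # a single training example
```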

Notice

While training, I used QLoRA, but with lora_r set to 64.
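
The full training configuration is not published here; below is a minimal sketch of what a QLoRA setup with lora_r = 64 could look like using peft and bitsandbytes. Everything except the rank (quantization settings, alpha, dropout, target modules) is an illustrative assumption, not the card's actual recipe.

```python
# Hypothetical QLoRA setup; only lora_r = 64 is stated in this card.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # QLoRA keeps base weights in 4-bit
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "01-ai/Yi-34B-200K",                    # the base model named above
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=64,                                   # the lora_r value from the notice
    lora_alpha=16,                          # assumption, not from the card
    lora_dropout=0.05,                      # assumption, not from the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()          # only the LoRA adapters train
```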

Model Benchmark

Open LLM Leaderboard

  • Scores below are taken from the Open LLM Leaderboard.
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PlatYi-34B-200K-Q | 62.00 | 63.91 | 83.52 | 75.19 | 44.21 | 81.06 | 24.11 |
| PlatYi-34B-Q | 69.86 | 66.89 | 85.14 | 77.66 | 53.03 | 82.48 | 53.98 |
| 01-ai/Yi-34B | 69.42 | 64.59 | 85.69 | 76.35 | 56.23 | 83.03 | 50.64 |
| 01-ai/Yi-34B-200K | 70.81 | 65.36 | 85.58 | 76.06 | 53.64 | 82.56 | 61.64 |

Implementation Code

```python
### PlatYi-34B-200K-Q
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "kyujinpy/PlatYi-34B-200K-Q"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,   # FP16 weights, as published
    device_map="auto",           # shard the 34B model across available devices
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
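
Once loaded, generation uses the standard transformers API. A minimal sketch follows; the Alpaca-style instruction template is an assumption (common for Open-Platypus fine-tunes), since the card does not specify a prompt format.

```python
# Hypothetical usage; the instruction template is an assumption.
prompt = (
    "### Instruction:\n"
    "Explain what a 200K-token context window allows.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```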

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
| --- | --- |
| Avg. | 62.00 |
| AI2 Reasoning Challenge (25-Shot) | 63.91 |
| HellaSwag (10-Shot) | 83.52 |
| MMLU (5-Shot) | 75.19 |
| TruthfulQA (0-shot) | 44.21 |
| Winogrande (5-shot) | 81.06 |
| GSM8k (5-shot) | 24.11 |
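
To reproduce a single metric locally, one option is EleutherAI's lm-evaluation-harness. This is a sketch assuming the v0.4 `simple_evaluate` API; the leaderboard's exact harness version and settings may differ, so expect small deviations from the table above.

```python
# Sketch: re-run the 25-shot ARC evaluation with lm-evaluation-harness (v0.4 API).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=kyujinpy/PlatYi-34B-200K-Q,dtype=float16",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```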