AIFT-42dot-LLM-PLM-1.3B-ao-instruct-all-v1.11
Base model: 42dot/42dot_LLM-SFT-1.3B
Training data: approximately 48,000 examples from an in-house Open Orca-style dataset (deduplicated, with the data distribution rebalanced)
Training method: full fine-tuning
Epochs: 3
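
How to use

The card does not ship a usage snippet; the sketch below assumes the checkpoint loads through the standard transformers AutoTokenizer/AutoModelForCausalLM classes (consistent with the Transformers version listed under Framework versions). The repository id and the prompt format are placeholders, not confirmed by the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository id -- replace with the actual Hugging Face Hub path.
model_id = "AIFT-42dot-LLM-PLM-1.3B-ao-instruct-all-v1.11"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.eval()

# Illustrative prompt ("What is the capital of South Korea?"); the exact
# instruction format used during fine-tuning is an assumption.
prompt = "다음 질문에 답하세요.\n대한민국의 수도는 어디입니까?"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```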
ko-lm-evaluation-harness (5-shot)

| kobest_boolq | kobest_copa | kobest_hellaswag | pawsx_ko |
|---|---|---|---|
| 0.52065527065527 | 0.721 | 0.466 | 0.5475 |
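
A reproduction sketch for these 5-shot scores is below. It assumes the ko-lm-evaluation-harness fork keeps the upstream lm-evaluation-harness Python API (evaluator.simple_evaluate) and registers the Korean tasks under the names shown in the table; the Hub path is a placeholder.

```python
from lm_eval import evaluator  # ko-lm-evaluation-harness, installed from source

# Assumption: the fork exposes the upstream simple_evaluate() entry point.
results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=<hub-repo-id>",  # placeholder Hub path
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag", "pawsx_ko"],
    num_fewshot=5,
)
print(results["results"])
```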
Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.0.0
- Tokenizers 0.15.0