---
model-index:
- name: EEVE-Math-10.8B
  results:
  - task:
      type: text-generation
    dataset:
      name: gsm8k-ko
      type: gsm8k
    metrics:
    - name: pass@1
      type: pass@1
      value: 0.539
      verified: false
base_model: yanolja/EEVE-Korean-10.8B-v1.0
---
# EEVE-Math-10.8B-SFT

This model builds on the ideas of [Orca-Math: Unlocking the potential of SLMs in Grade School Math](https://arxiv.org/pdf/2402.14830.pdf) and [DARE](https://arxiv.org/abs/2311.03099), and applies them in its training and merging.

| Model | gsm8k-ko (pass@1) |
|---|---|
| Base | 0.4049 |
| SFT Epoch 1 | 0.508 |
| SFT Epoch 2 (M1) | **0.539** |
| SFT -> KTO (M2) | - |
| SFT -> KTO -> KTO (final) | - |

## Specifications
This checkpoint corresponds to the SFT (M1) stage.

## Base Model
[yanolja/EEVE-Korean-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0)

## Dataset
[orca-math-word-problems-193k-korean](https://huggingface.co/datasets/kuotient/orca-math-word-problems-193k-korean)

## Evaluation
[gsm8k-ko](https://huggingface.co/datasets/kuotient/gsm8k-ko), kobest

```bash
git clone https://github.com/kuotient/lm-evaluation-harness.git
cd lm-evaluation-harness
pip install -e .
```

```bash
lm_eval --model hf \
    --model_args pretrained=yanolja/EEVE-Korean-Instruct-2.8B-v1.0 \
    --tasks gsm8k-ko \
    --device cuda:0 \
    --batch_size auto:4
```

| Model | gsm8k-ko (pass@1) | boolq (acc) | copa (acc) | hellaswag (acc) | Overall |
|---|---|---|---|---|---|
| yanolja/EEVE-Korean-10.8B-v1.0 | 0.4049 | - | - | - | - |
| yanolja/EEVE-Korean-Instruct-10.8B-v1.0 | 0.4511 | **0.8668** | **0.7450** | **0.4940** | 0.6392 |
| **EEVE-Math-10.8B** | **0.5390** | 0.8027 | 0.7260 | 0.4760 | 0.6359 |
| [**EEVE-Instruct-Math-10.8B**](https://huggingface.co/kuotient/EEVE-Instruct-Math-10.8B) | 0.4951 | 0.8283 | 0.7500 | 0.4880 | **0.6403** |
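The DARE-merged variant ([EEVE-Instruct-Math-10.8B](https://huggingface.co/kuotient/EEVE-Instruct-Math-10.8B)) in the table above combines this math SFT model with the instruct model on top of the shared base. A hypothetical mergekit config sketching a `dare_ties` merge of this kind — the model path `kuotient/EEVE-Math-10.8B` and the `density`/`weight` values are assumptions, not the settings actually used:

```yaml
# Hypothetical mergekit config (illustrative values only).
models:
  - model: kuotient/EEVE-Math-10.8B          # assumed repo id for this SFT model
    parameters:
      density: 0.5   # fraction of delta weights kept after DARE's random drop
      weight: 0.5
  - model: yanolja/EEVE-Korean-Instruct-10.8B-v1.0
    parameters:
      density: 0.5
      weight: 0.5
merge_method: dare_ties
base_model: yanolja/EEVE-Korean-10.8B-v1.0
dtype: bfloat16
```

DARE randomly drops a fraction of each fine-tune's delta weights and rescales the rest, which lets the math and instruct deltas be combined with less interference.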
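The pass@1 numbers above are the fraction of gsm8k-ko problems whose first (and only) sampled answer matches the reference. A minimal sketch of the metric, assuming exact string match on extracted final answers (the `pass_at_1` helper below is illustrative, not part of lm-evaluation-harness):

```python
def pass_at_1(predictions, references):
    """Fraction of problems whose single sampled answer matches the reference."""
    assert len(predictions) == len(references)
    correct = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return correct / len(references)

# Toy example: 2 of 4 extracted final answers match their references.
print(pass_at_1(["72", "10", "5", "42"], ["72", "11", "6", "42"]))  # → 0.5
```

With one sample per problem this reduces to plain accuracy; the general pass@k estimator only differs when multiple completions are drawn per problem.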