---
model-index:
- name: EEVE-Math-10.8B
  results:
  - task:
      type: text-generation
    dataset:
      name: gsm8k-ko
      type: gsm8k
    metrics:
    - name: pass@1
      type: pass@1
      value: 0.539
      verified: false
base_model: yanolja/EEVE-Korean-10.8B-v1.0
license: cc-by-sa-4.0
language:
- ko
tags:
- math
datasets:
- kuotient/orca-math-word-problems-193k-korean
---
# EEVE-Math-10.8B

The `EEVE-Math` project covers:

- a Korean translation of Orca-Math-200k ([Orca-Math: Unlocking the potential of SLMs in Grade School Math](https://arxiv.org/pdf/2402.14830.pdf))
- a Korean translation of gsm8k, evaluated with lm_eval
- dare-ties merging with Mergekit ([DARE](https://arxiv.org/abs/2311.03099))

> This model was trained on the orca-math-word-problems-193k-korean dataset. Some responses return results in LaTeX, but the formatting may be incomplete.

Work has currently progressed through the M1 stage.

| Model | gsm8k-ko (pass@1) |
|---|---|
| Base | 0.4049 |
| SFT (M1) | 0.508 |
| SFT (M1) -> SFT | **0.539** |
| SFT (M1) -> KTO (M2) | - |
| SFT -> KTO (M2) -> KTO (final) | - |

## Specifications
- Stage: SFT (M1) -> SFT

## Base Model
[yanolja/EEVE-Korean-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0)

## Dataset
[orca-math-word-problems-193k-korean](https://huggingface.co/datasets/kuotient/orca-math-word-problems-193k-korean)

## Evaluation
[gsm8k-ko](https://huggingface.co/datasets/kuotient/gsm8k-ko), kobest

```
git clone https://github.com/kuotient/lm-evaluation-harness.git
cd lm-evaluation-harness
pip install -e .
```

```
lm_eval --model hf \
    --model_args pretrained=yanolja/EEVE-Korean-Instruct-2.8B-v1.0 \
    --tasks gsm8k-ko \
    --device cuda:0 \
    --batch_size auto:4
```

| Model | gsm8k (pass@1) | boolq (acc) | copa (acc) | hellaswag (acc) | Overall |
|---|---|---|---|---|---|
| yanolja/EEVE-Korean-10.8B-v1.0 | 0.4049 | - | - | - | - |
| yanolja/EEVE-Korean-Instruct-10.8B-v1.0 | 0.4511 | **0.8668** | **0.7450** | 0.4940 | 0.6392 |
| **EEVE-Math-10.8B** | **0.5390** | 0.8027 | 0.7260 | 0.4760 | 0.6359 |
| [**EEVE-Instruct-Math-10.8B**](https://huggingface.co/kuotient/EEVE-Instruct-Math-10.8B) | 0.4845 | 0.8519 | 0.7410 | **0.4980** | **0.6439** |