# nox
The nox project is a set of tools that makes it easy to apply various fine-tuning techniques (SFT and DPO) to Solar models. We constructed a Korean (ko) dataset from grammatically accurate data (it is not perfect, but we did our best) and used it to fine-tune the nox-solar model, which ranked first on the Open Ko-LLM Leaderboard.
We are currently planning to release all code and datasets publicly, so that users can freely conduct research and development with nox.
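Because the model is hosted on the Hugging Face Hub, it can be loaded with the `transformers` library. The snippet below is a minimal, illustrative sketch; the prompt and generation settings are assumptions, not part of this card, and may need to be adapted to the model's chat template.

```python
# Minimal sketch: load nox-solar from the Hugging Face Hub and generate text.
# The prompt and generation settings below are assumptions, not taken from
# this card; adapt them to the model's actual chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davidkim205/nox-solar-10.7b-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumed precision; bfloat16/float32 also work
    device_map="auto",
)

prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```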
## Model Details
- Model developers : davidkim (Changyeon Kim)
- Repository : https://github.com/davidkim205/nox
- Base model : Edentns/DataVortexS-10.7B-dpo-v1.11
- DPO dataset : davidkim205/kollm-comparision (see the illustrative sketch after this list)
- Evaluation tool : kollm_evalution
- Evaluation dataset : Open Ko-LLM Leaderboard datasets
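Since the card names the base model and the DPO comparison dataset, the sketch below shows how a DPO stage could be set up with the TRL library. This is not the project's actual training script: the hyperparameters, dataset split, and column names (`prompt`/`chosen`/`rejected`) are assumptions, and TRL's constructor arguments vary between versions.

```python
# Illustrative DPO setup with TRL. NOT the project's training code:
# hyperparameters, dataset columns (prompt/chosen/rejected), and the output
# path are assumptions. TRL's API changes across versions (e.g. newer
# releases use processing_class= instead of tokenizer=).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model_id = "Edentns/DataVortexS-10.7B-dpo-v1.11"
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id)

# Preference pairs for DPO; the split name and columns are assumed.
train_dataset = load_dataset("davidkim205/kollm-comparision", split="train")

config = DPOConfig(
    output_dir="nox-solar-dpo",       # hypothetical output directory
    beta=0.1,                         # assumed DPO temperature
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,                   # TRL builds a frozen reference copy
    args=config,
    train_dataset=train_dataset,
    tokenizer=tokenizer,              # newer TRL: processing_class=tokenizer
)
trainer.train()
```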
## Evaluation
### The Open Ko-LLM Leaderboard
| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
|---|---|---|---|---|---|---|
| davidkim205/nox-solar-10.7b-v2 | 65.38 | 73.46 | 67.32 | 58.7 | 71.94 | 55.49 |
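The leaderboard Average appears to be the unweighted mean of the five benchmark scores above; the check below reproduces the reported value (an observation about this table, not an official leaderboard formula).

```python
# Reproduce the "Average" column from the five per-benchmark scores above.
scores = {
    "Ko-ARC": 73.46,
    "Ko-HellaSwag": 67.32,
    "Ko-MMLU": 58.70,
    "Ko-TruthfulQA": 71.94,
    "Ko-CommonGen V2": 55.49,
}
average = sum(scores.values()) / len(scores)
print(f"{average:.2f}")  # 65.38, matching the reported Average
```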
### kollm_evalution
| Model | Average | Ko-TruthfulQA_mc1 | Ko-MMLU | Ko-HellaSwag | Ko-CommonGen V2 | Ko-ARC-Easy | kobest | kobest_boolq | kobest_copa | kobest_hellaswag | kobest_sentineg | kobest_wic |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| davidkim205/nox-solar-10.7b-v2 | 66.68 | 55.2 | 46.39 | 84.99 | 85.98 | 68.17 | 59.33 | 50.71 | 75.5 | 59 | 94.46 | 48.81 |
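The scores above come from kollm_evalution; its exact command line is not reproduced here. As a rough analogue, EleutherAI's lm-evaluation-harness can score the model on the KoBEST subtasks listed in the table. The task names and arguments below depend on the installed harness version and are assumptions rather than the project's evaluation setup.

```python
# Rough analogue using EleutherAI's lm-evaluation-harness (not kollm_evalution).
# Task names and availability depend on the installed harness version.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=davidkim205/nox-solar-10.7b-v2,dtype=float16",
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag",
           "kobest_sentineg", "kobest_wic"],
    batch_size=8,
)
print(results["results"])  # per-task scores
```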