Files changed (1): README.md (+39 −19)
@@ -35,34 +35,54 @@ We then compare likelihoods of each letter (`A, B, C, D, E`) and calculate the f
 GPT-like models were evaluated by taking the top 20 probabilities of the first output token, which were then filtered for the letters `A` to `E`. The letter with the highest probability was taken as the final answer.

- Exact code for the task will be posted [here]().

 ## Evaluation results
- | Model |Accuracy| |Stderr|
- |-------|-------:|--|-----:|
- |GPT-4-0125-preview|0.9199|±|0.0020|
- |GPT-4o-2024-05-13|0.9196|±|0.0017|
- |GPT-3.5-turbo-0125|0.8245|±|0.0016|
- |[Tito-7B-slerp](https://huggingface.co/Stopwolf/Tito-7B-slerp)|0.7099|±|0.0101|
- |[Qwen2-7B-instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct)|0.6730|±|0.0105|
- |[Yugo60-GPT](https://huggingface.co/datatab/Yugo60-GPT)|0.6411|±|0.0107|
- |[Llama3-70B-Instruct (4bit)](https://huggingface.co/unsloth/llama-3-70b-Instruct-bnb-4bit)|0.5942|±|0.0110|
- |[Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B)|0.5852|±|0.0110|
- |[Llama3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)|0.5274|±|0.0111|
- |[Starling-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)|0.5244|±|0.0112|
- |[Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)|0.5145|±|0.0112|
- |[Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct)|0.4506|±|0.0111|
- |[Perucac-7B-slerp](https://huggingface.co/Stopwolf/Perucac-7B-slerp)|0.4247|±|0.0110|
- |[SambaLingo-Serbian-Chat](https://huggingface.co/sambanovasystems/SambaLingo-Serbian-Chat)|0.2802|±|0.0100|
- |[Gemma-2-9B-it](https://huggingface.co/google/gemma-2-9b-it)|0.2193|±|0.0092|

 ### Citation
 ```
 @article{oz-eval,
 author = "Stanivuk, Siniša and Đorđević, Milena",
- title = "Opšte znanje LLM Eval",
 year = "2024",
 howpublished = {\url{https://huggingface.co/datasets/DjMel/oz-eval}},
 }
 
 GPT-like models were evaluated by taking the top 20 probabilities of the first output token, which were then filtered for the letters `A` to `E`. The letter with the highest probability was taken as the final answer.
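The answer-extraction step described above can be sketched as follows; `pick_answer`, the token strings, and the log-probability values are illustrative stand-ins, not the exact task code:

```python
from typing import Dict, Optional

VALID_LETTERS = {"A", "B", "C", "D", "E"}

def pick_answer(top_logprobs: Dict[str, float]) -> Optional[str]:
    """Select the answer letter from the top-k log-probabilities of the
    first output token: keep only tokens that are one of the letters A-E
    (after stripping whitespace) and return the most likely one."""
    candidates = {}
    for token, logprob in top_logprobs.items():
        letter = token.strip()
        if letter in VALID_LETTERS:
            # Several tokenizations may map to the same letter (" B", "B", ...);
            # keep the highest-probability occurrence.
            if letter not in candidates or logprob > candidates[letter]:
                candidates[letter] = logprob
    if not candidates:
        return None  # no valid letter among the top-k tokens
    return max(candidates, key=candidates.get)

# Illustrative top-20 slice (made-up tokens and log-probabilities):
example = {" B": -0.45, "B": -2.1, " A": -1.6, "Odgovor": -3.2, " C": -4.0}
```

Here `pick_answer(example)` returns `"B"`, and an output whose top tokens contain no valid letter yields `None`.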

+ Exact code for the task can be found as a PR [here](https://github.com/huggingface/lighteval/pull/225).

+ Run the evaluation with the following command (do not forget to add `--use_chat_template`):
+ ```
+ accelerate launch lighteval/run_evals_accelerate.py \
+     --model_args "pretrained={MODEL_NAME},trust_remote_code=True" \
+     --use_chat_template \
+     --tasks "community|serbian_evals:oz_task|0|0" \
+     --custom_tasks "/content/lighteval/community_tasks/oz_evals.py" \
+     --output_dir "./evals" \
+     --override_batch_size 32
+ ```

 ## Evaluation results
+ | Model |Size|Accuracy| |Stderr|
+ |-------|---:|-------:|--|-----:|
+ |GPT-4-0125-preview|_???_|0.9199|±|0.0020|
+ |GPT-4o-2024-05-13|_12B_|0.9196|±|0.0017|
+ |GPT-3.5-turbo-0125|_20B_|0.8245|±|0.0016|
+ |GPT-4o-mini-2024-07-18|_???_|0.7971|±|0.0005|
+ |[Mustra-7B-Instruct-v0.2](https://huggingface.co/Stopwolf/Mustra-7B-Instruct-v0.2)|7B|0.7388|±|0.0098|
+ |[Tito-7B-slerp](https://huggingface.co/Stopwolf/Tito-7B-slerp)|7B|0.7099|±|0.0101|
+ |[Yugo55A-GPT](https://huggingface.co/datatab/Yugo55A-GPT)|7B|0.6889|±|0.0103|
+ |[Zamfir-7B-slerp](https://huggingface.co/Stopwolf/Zamfir-7B-slerp)|7B|0.6849|±|0.0104|
+ |[Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)|12.2B|0.6839|±|0.0104|
+ |[Qwen2-7B-instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct)|7B|0.6730|±|0.0105|
+ |[Llama-3-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct)|8B|0.6610|±|0.0106|
+ |[Yugo60-GPT](https://huggingface.co/datatab/Yugo60-GPT)|7B|0.6411|±|0.0107|
+ |[DeepSeek-V2-Lite-Chat](https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite-Chat)|15.7B|0.6047|±|0.0109|
+ |[Llama3-70B-Instruct (4bit)](https://huggingface.co/unsloth/llama-3-70b-Instruct-bnb-4bit)|70B|0.5942|±|0.0110|
+ |[Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B)|8B|0.5852|±|0.0110|
+ |[Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)|7B|0.5753|±|0.0110|
+ |[openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522)|8B|0.5513|±|0.0111|
+ |[Llama3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)|8B|0.5274|±|0.0111|
+ |[Starling-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)|7B|0.5244|±|0.0112|
+ |[Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)|7B|0.5145|±|0.0112|
+ |[Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct)|1.5B|0.4506|±|0.0111|
+ |[Perucac-7B-slerp](https://huggingface.co/Stopwolf/Perucac-7B-slerp)|7B|0.4247|±|0.0110|
+ |[Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)|3.8B|0.3719|±|0.0108|
+ |[SambaLingo-Serbian-Chat](https://huggingface.co/sambanovasystems/SambaLingo-Serbian-Chat)|7B|0.2802|±|0.0100|
+ |[Gemma-2-9B-it](https://huggingface.co/google/gemma-2-9b-it)|9B|0.2193|±|0.0092|
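For reference, the Stderr column is consistent with the usual binomial standard error of an accuracy estimated over N questions, sqrt(acc * (1 - acc) / N). Back-solving from the reported values suggests N ≈ 2023; that count is an inference from the table, not a figure stated in this README:

```python
import math

def binomial_stderr(accuracy: float, n_questions: int) -> float:
    """Standard error of an accuracy estimate over n independent questions."""
    return math.sqrt(accuracy * (1.0 - accuracy) / n_questions)

# N_QUESTIONS = 2023 (inferred) reproduces the Stderr column to 4 decimals,
# e.g. acc 0.5274 -> 0.0111 and acc 0.2193 -> 0.0092.
N_QUESTIONS = 2023
for acc in (0.5274, 0.7099, 0.2193):
    print(f"acc={acc:.4f}  stderr={binomial_stderr(acc, N_QUESTIONS):.4f}")
```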
  ### Citation
 ```
 @article{oz-eval,
 author = "Stanivuk, Siniša and Đorđević, Milena",
+ title = "OZ Eval: Measuring University Level General Knowledge of LLMs in Serbian Language",
 year = "2024",
 howpublished = {\url{https://huggingface.co/datasets/DjMel/oz-eval}},
 }