SultanR committed
Commit 1b6e6ed
Parent: 6d88372

Update README.md

Files changed (1)
  1. README.md +4 -0
README.md CHANGED
````diff
@@ -164,9 +164,13 @@ print(tokenizer.decode(outputs[0]))
 ```
 
 You can also use the model in llama.cpp through the [gguf version](https://huggingface.co/SultanR/SmolTulu-1.7b-Instruct-GGUF)!
+
 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+
 Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_SultanR__SmolTulu-1.7b-Instruct)
 
+To give a more holistic overview, I also added the Open LLM Leaderboard results, which differ a lot from the script that was used to benchmark SmolLM2-Instruct.
+
 | Metric |Value|
 |-------------------|----:|
 |Avg. |15.45|
````