nicholasKluge committed
Commit ae869ee
1 Parent(s): c95afe2

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -167,6 +167,8 @@ for i, completion in enumerate(completions):
 
  ## Benchmarks
 
+ Evaluations on benchmarks were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)). Thanks to [Laiviet](https://github.com/laiviet/lm-evaluation-harness) for translating some of the tasks in the LM-Evaluation-Harness. The results of models marked with an "*" were extracted from the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
+
  | Models | Average | [ARC](https://arxiv.org/abs/1803.05457) | [Hellaswag](https://arxiv.org/abs/1905.07830) | [MMLU](https://arxiv.org/abs/2009.03300) | [TruthfulQA](https://arxiv.org/abs/2109.07958) |
  |-------------------------------------------------------------------------------------|---------|-----------------------------------------|-----------------------------------------------|------------------------------------------|------------------------------------------------|
  | [TeenyTinyLlama-460m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m) | 33.01 | 29.40 | 33.00 | 28.55 | 41.10 |
@@ -179,8 +181,6 @@ for i, completion in enumerate(completions):
  | [Gpt2-small](https://huggingface.co/gpt2) | 29.97 | 21.48* | 31.60* | 25.79* | 40.65* |
  | [Multilingual GPT](https://huggingface.co/ai-forever/mGPT) | 29.45 | 24.79 | 26.37* | 25.17* | 41.50 |
 
- - Evaluations on benchmarks were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)). Thanks to [Laiviet](https://github.com/laiviet/lm-evaluation-harness) for translating some of the tasks in the LM-Evaluation-Harness. The results of models marked with an "*" were extracted from the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
-
  ## Fine-Tuning Comparisons
 
  | Models | Average | [IMDB](https://huggingface.co/datasets/christykoh/imdb_pt) | [FaQuAD-NLI](https://huggingface.co/datasets/ruanchaves/faquad-nli) | [HateBr](https://huggingface.co/datasets/ruanchaves/hatebr) | [Assin2](https://huggingface.co/datasets/assin2) | [AgNews](https://huggingface.co/datasets/maritaca-ai/ag_news_pt) |
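
For readers who want to reproduce numbers like those in the benchmark table, a minimal sketch of calling the Evaluation Harness from Python is below. The task names, few-shot settings, and model arguments are assumptions rather than the recorded configuration: the commit does not state which harness version, tasks, or few-shot counts produced these figures, and some of the tasks used for this model come from Laiviet's translated fork rather than the upstream package used here.

```python
# Minimal sketch (not the author's exact setup) of evaluating a model with
# EleutherAI's lm-evaluation-harness Python API (pip install lm-eval).
# Task names and batch size below are assumptions; few-shot counts default
# to each task's standard setting.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=nicholasKluge/TeenyTinyLlama-460m",
    tasks=["arc_challenge", "hellaswag", "mmlu", "truthfulqa_mc2"],
    batch_size=8,
)

# Print the per-task metric dictionaries returned by the harness.
for task, metrics in results["results"].items():
    print(task, metrics)
```

The equivalent command-line entry point (`lm_eval --model hf --model_args pretrained=... --tasks ...`) produces the same report; either route yields per-task scores that can be averaged as in the table above.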