nicholasKluge committed on
Commit
b995a94
1 Parent(s): e35d098

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED

```diff
@@ -50,7 +50,7 @@ Also, TeenyTinyLlama models were trained by leveraging [scaling laws](https://ar
 - **Context length:** 2048 tokens
 - **Dataset:** [Portuguese-Corpus-v3](https://huggingface.co/datasets/nicholasKluge/portuguese-corpus-v3) (6.2B tokens)
 - **Language:** Portuguese
-- **Number of steps:** 457,969 (3.7B tokens)
+- **Number of steps:** 457,969
 - **GPU:** 1 NVIDIA A100-SXM4-40GB
 - **Training time**: ~ 36 hours
 - **Emissions:** 5.6 KgCO2 (Germany)
@@ -178,7 +178,7 @@ for i, completion in enumerate(completions):
 | [Bloom-560m](https://huggingface.co/bigscience/bloom-560m)* | 32.13 | 24.74 | 37.15 | 24.22 | 42.44 |
 | [Multilingual GPT](https://huggingface.co/ai-forever/mGPT)* | 28.73 | 23.81 | 26.37 | 25.17 | 39.62 |
 
-* Evaluations on benchmarks were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)). Thanks to [Laiviet](https://github.com/laiviet/lm-evaluation-harness) for translating some of the tasks in the LM-Evaluation-Harness. The results of models marked with an "*" were retrieved from the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
+- Evaluations on benchmarks were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)). Thanks to [Laiviet](https://github.com/laiviet/lm-evaluation-harness) for translating some of the tasks in the LM-Evaluation-Harness. The results of models marked with an "*" were retrieved from the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
 
 ## Fine-Tuning Comparisons
```
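The first hunk drops the "(3.7B tokens)" note next to the step count. As a hedged sanity check of how such a token total relates to the listed training stats, total tokens seen can be reconstructed as steps × batch size × context length. The batch size of 4 below is an assumption made for illustration; it is not stated anywhere in this commit:

```python
# Hedged sanity check: reconstruct a total-token figure from the README's
# training stats. The batch size is an ASSUMPTION, not stated in this diff.
steps = 457_969          # "Number of steps" from the README
context_length = 2_048   # "Context length: 2048 tokens"
assumed_batch_size = 4   # hypothetical value chosen for illustration

total_tokens = steps * assumed_batch_size * context_length
print(f"{total_tokens / 1e9:.2f}B tokens")  # prints "3.75B tokens"
```

Under that assumed batch size the product lands at ~3.75B, in the same ballpark as the removed "(3.7B tokens)" parenthetical; with a different (unknown) batch size the figure would change proportionally.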