nicholasKluge committed
Commit 31d7b7e
1 Parent(s): 660e27e

Update README.md

Files changed (1)
  1. README.md +68 -24
README.md CHANGED
 
This repository has the [source code](https://github.com/Nkluge-correa/TeenyTinyLlama) used to train this model.

## Intended Uses

The primary intended use of TeenyTinyLlama is to research the challenges related to developing language models for low-resource languages. Checkpoints saved during training are intended to provide a controlled setting for performing scientific experiments. You may also further fine-tune and adapt TeenyTinyLlama for deployment, as long as your use complies with the Apache 2.0 license. If you decide to use the pre-trained TeenyTinyLlama as a basis for your fine-tuned model, please conduct your own risk and bias assessment.

## Out-of-scope Use

TeenyTinyLlama is not intended for deployment. It is not a product and should not be used for human-facing interactions.

The TeenyTinyLlama models are Brazilian Portuguese-only and are not suitable for translation or for generating text in other languages.

TeenyTinyLlama has not been fine-tuned for downstream contexts in which language models are commonly deployed.

## Usage

The following special tokens are used to mark the user side of the interaction and the model's response:
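
A minimal loading-and-generation sketch with the `transformers` library is shown below; the repository id, the `<instruction>`/`</instruction>` markers, and the generation settings are illustrative placeholders rather than values confirmed by this card.

```python
# Illustrative sketch only: the repository id, the <instruction> markers, and the
# generation settings are placeholders, not values confirmed by this model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nicholasKluge/TeenyTinyLlama-460m"  # assumed id; point it at the variant you use

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Wrap the user turn in the chat-format special tokens (placeholder names here).
prompt = "<instruction>Qual é a capital do Brasil?</instruction>"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=150,
    do_sample=True,
    temperature=0.3,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```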
 
## Limitations

Like almost all other language models trained on large text datasets scraped from the web, the TTL pair exhibits behaviors that keep them from being an out-of-the-box solution for many real-world applications, especially those requiring factual, reliable, and nontoxic text generation. Our models are all subject to the following:

- **Hallucinations:** This model can produce content that can be mistaken for truth but is, in fact, misleading or entirely false, i.e., hallucination.

- **Biases and Toxicity:** This model inherits the social and historical stereotypes from the data used to train it. Given these biases, the model can produce toxic content, i.e., harmful, offensive, or detrimental to individuals, groups, or communities.

- **Unreliable Code:** The model may produce incorrect code snippets and statements. These code generations should not be treated as suggestions or accurate solutions.

- **Language Limitations:** The model is primarily designed to understand standard Brazilian Portuguese. Other languages might challenge its comprehension, leading to potential misinterpretations or errors in response.

- **Repetition and Verbosity:** The model may get stuck on repetition loops (especially if the repetition penalty during generation is set too low) or produce verbose responses unrelated to the prompt it was given; a sketch of mitigating generation settings follows this list.
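
The sketch below (placeholder model id and values, not recommendations from this card) shows the kind of generation settings that help with the repetition issue described above:

```python
# Illustrative mitigation for repetition loops: sample instead of greedy decoding
# and raise repetition_penalty. Model id and values are placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="nicholasKluge/TeenyTinyLlama-460m")

output = generator(
    "A capital do Brasil é",
    max_new_tokens=60,
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1.2,   # values close to 1.0 make repetition loops more likely
    no_repeat_ngram_size=3,   # optionally forbid repeating any 3-gram
)
print(output[0]["generated_text"])
```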

Hence, even though our models are released with a permissive license, we urge users to perform their own risk analysis if they intend to use them in real-world applications, and to have humans moderate the outputs of these models in applications where they will interact with an audience, guaranteeing users are always aware they are interacting with a language model.

## Evaluations

During our training runs, both models showed consistent convergence. At no point did our evaluation curves show signs of overfitting or saturation. In the case of our 460m-parameter model, we intentionally trained approximately 75,000 steps past the optimal point to assess whether there were any signs of saturation, but our evaluations consistently gave better results. We hypothesize that our models are under-trained and could improve if trained past the Chinchilla-optimal range.
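
For reference, a back-of-the-envelope reading of that remark (ours, not a figure stated in the original card): the commonly cited Chinchilla heuristic of roughly 20 training tokens per parameter puts the optimum for a 460M-parameter model at about 20 × 460M ≈ 9.2B tokens, close to the ~9.8B processed tokens reported in the last row of the table below.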

| Processed Tokens | Perplexity | Energy Consumption (kWh) | Emissions (KgCO2eq) |
|------------------|------------|--------------------------|---------------------|
| 8.1M             | 20.49      | 9.40                     | 3.34                |
| 1.6B             | 16.90      | 18.82                    | 6.70                |
| 2.4B             | 15.43      | 28.59                    | 10.16               |
| 3.2B             | 14.64      | 38.20                    | 13.57               |
| 4.0B             | 14.08      | 48.04                    | 17.07               |
| 4.9B             | 13.61      | 57.74                    | 20.52               |
| 5.7B             | 13.25      | 67.32                    | 23.92               |
| 6.5B             | 12.87      | 76.84                    | 27.30               |
| 7.3B             | 12.57      | 86.40                    | 30.70               |
| 8.1B             | 12.27      | 96.19                    | 34.18               |
| 9.0B             | 11.96      | 106.06                   | 37.70               |
| 9.8B             | 11.77      | 115.69                   | 41.31               |
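
The perplexities above come from the authors' evaluation runs; the snippet below is only a generic sketch of how perplexity is usually computed for a causal language model (the model id and the example text are placeholders).

```python
# Generic perplexity sketch: exp of the mean cross-entropy loss on held-out text.
# The model id and the example text are placeholders, not the authors' evaluation setup.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nicholasKluge/TeenyTinyLlama-460m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).eval()

text = "Texto de validação em português brasileiro, usado aqui apenas como exemplo."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Labels equal to input_ids: the model shifts them internally for next-token loss.
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print(f"Perplexity: {math.exp(loss.item()):.2f}")
```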

## Benchmarks

Evaluations on benchmarks were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)). [Laiviet](https://github.com/laiviet/lm-evaluation-harness) translated the tasks we used from the LM-Evaluation-Harness. The results of models marked with an "*" were extracted from the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

|                  | **ARC**   | **HellaSwag** | **MMLU**  | **TruthfulQA** | **Average** |
|------------------|-----------|---------------|-----------|----------------|-------------|
| Pythia-410m      | 24.83*    | **41.29***    | 25.99*    | 40.95*         | 33.26       |
| **TTL-460m**     | **29.40** | 33.00         | **28.55** | 41.10          | 33.01       |
| Bloom-560m       | 24.74*    | 37.15*        | 24.22*    | 42.44*         | 32.13       |
| Xglm-564M        | 25.56     | 34.64*        | 25.18*    | **42.53**      | 31.97       |
| OPT-350m         | 23.55*    | 36.73*        | 26.02*    | 40.83*         | 31.78       |
| **TTL-160m**     | 26.15     | 29.29         | 28.11     | 41.12          | 31.16       |
| Pythia-160m      | 24.06*    | 31.39*        | 24.86*    | 44.34*         | 31.16       |
| OPT-125m         | 22.87*    | 31.47*        | 26.02*    | 42.87*         | 30.80       |
| GPorTuguese-2    | 22.48     | 29.62         | 27.36     | 41.44          | 30.22       |
| Gpt2-small       | 21.48*    | 31.60*        | 25.79*    | 40.65*         | 29.97       |
| Multilingual GPT | 23.81     | 26.37*        | 25.17*    | 39.62          | 28.73       |
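
For readers who want to run a comparable evaluation, the sketch below uses a recent release of the Evaluation Harness; note that it calls the standard English tasks with default settings, not the translated tasks from the Laiviet fork or the exact few-shot configuration behind the table above.

```python
# Illustrative only: standard English tasks from a recent lm-evaluation-harness
# release, not the translated tasks or few-shot settings used for the table above.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=nicholasKluge/TeenyTinyLlama-460m",
    tasks=["arc_challenge", "hellaswag", "truthfulqa_mc2"],
    batch_size=8,
)

for task, metrics in results["results"].items():
    print(task, metrics)
```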

## Fine-Tuning Comparisons

To further evaluate the downstream capabilities of our models, we employed a basic fine-tuning procedure for our TTL pair on a subset of tasks from the Poeta benchmark. For comparison, we applied the same procedure to both [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) models, given that they are also LLMs trained from scratch on Brazilian Portuguese and are in a similar size range to our models. We used these comparisons to assess whether our pre-training runs produced LLMs capable of good results ("good" here means "close to BERTimbau") when utilized for downstream applications.

| Models          | IMDB      | FaQuAD-NLI | HateBr    | Assin2    | AgNews    | Average |
|-----------------|-----------|------------|-----------|-----------|-----------|---------|
| BERTimbau-large | **93.58** | 92.26      | 91.57     | **88.97** | 94.11     | 92.10   |
| BERTimbau-small | 92.22     | **93.07**  | 91.28     | 87.45     | 94.19     | 91.64   |
| **TTL-460m**    | 91.64     | 91.18      | **92.28** | 86.43     | **94.42** | 91.19   |
| **TTL-160m**    | 91.14     | 90.00      | 90.71     | 85.78     | 94.05     | 90.34   |

All reported results are the highest accuracy scores achieved on the respective task test sets after fine-tuning the models on the training sets. All fine-tuning runs used the same hyperparameters, and the code implementation can be found in the [model cards](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m-HateBR) of our fine-tuned models.
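
The actual implementation and hyperparameters live in the linked model cards; the sketch below only illustrates the general shape of such a sequence-classification fine-tune with `transformers`, with the dataset id, column names, and hyperparameters as placeholders rather than the authors' settings.

```python
# Sketch only: dataset id, column names, and hyperparameters are placeholders,
# not the configuration used to produce the table above.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

model_id = "nicholasKluge/TeenyTinyLlama-460m"
dataset = load_dataset("some-org/hatebr")  # placeholder id for one of the Poeta tasks

tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # classification batches need a pad token

model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

def tokenize(batch):
    # Assumes "text"/"label" columns; adjust to the task's actual schema.
    return tokenizer(batch["text"], truncation=True, max_length=256)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="ttl-460m-finetuned",
        learning_rate=4e-5,
        per_device_train_batch_size=16,
        num_train_epochs=3,
    ),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    data_collator=DataCollatorWithPadding(tokenizer),
)

trainer.train()
print(trainer.evaluate())
```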

## Cite as 🤗

```latex
@misc{correa24ttllama,
  title = {TeenyTinyLlama: a pair of open-source tiny language models trained in Brazilian Portuguese},
  author = {Corr{\^e}a, Nicholas Kluge and Falk, Sophia and Fatimah, Shiza and Sen, Aniket and De Oliveira, Nythamar},
  journal = {arXiv},
  year = {2024},
}
```