---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- text-generation-inference
datasets:
- nicholasKluge/Pt-Corpus-Instruct
metrics:
- perplexity
pipeline_tag: text-generation
widget:
- text: A PUCRS é uma universidade
example_title: Exemplo
- text: Há muitos anos atrás, em uma galáxia muito distante, vivia uma raça de
example_title: Exemplo
- text: Em meio a um escândalo, a frente parlamentar pediu ao Senador Silva para
example_title: Exemplo
inference:
parameters:
repetition_penalty: 1.2
temperature: 0.2
top_k: 20
top_p: 0.2
max_new_tokens: 150
co2_eq_emissions:
emissions: 110000
source: CodeCarbon
training_type: pre-training
geographical_location: Germany
hardware_used: NVIDIA A40
model-index:
- name: Mula-8x160-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: ENEM Challenge (No Images)
type: eduagarcia/enem_challenge
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 20.5
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-8x160-v0.1
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BLUEX (No Images)
type: eduagarcia-temp/BLUEX_without_images
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 21.28
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-8x160-v0.1
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: OAB Exams
type: eduagarcia/oab_exams
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 26.65
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-8x160-v0.1
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 RTE
type: assin2
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 22.38
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-8x160-v0.1
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 STS
type: eduagarcia/portuguese_benchmark
split: test
args:
num_few_shot: 15
metrics:
- type: pearson
value: 4.73
name: pearson
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-8x160-v0.1
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: FaQuAD NLI
type: ruanchaves/faquad-nli
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 43.97
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-8x160-v0.1
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HateBR Binary
type: ruanchaves/hatebr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 33.33
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-8x160-v0.1
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: PT Hate Speech Binary
type: hate_speech_portuguese
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 40.21
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-8x160-v0.1
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: tweetSentBR
type: eduagarcia/tweetsentbr_fewshot
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 18.46
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-8x160-v0.1
name: Open Portuguese LLM Leaderboard
---
# Mula-8x160-v0.1
<img src="./logo-no-bg.png" alt="Mula" height="200">
## Model Summary
Mula is a series of Sparse Mixture of Experts (SMoE) language models, all trained natively in Brazilian Portuguese, designed to help democratize LLMs for low-resource languages.
Mula-8x160-v0.1 is one of our first experiments on pre-training a SMoE, using the [Pt-Corpus-Instruct](https://huggingface.co/datasets/nicholasKluge/Pt-Corpus-Instruct) dataset. It has 8 experts per layer and activates 4 for each token.
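To make the routing concrete, below is a toy sketch of Mixtral-style top-k routing, with illustrative tensor sizes that are not the model's actual dimensions: a router scores all 8 experts for each token, and only the top 4 are evaluated, with their outputs mixed by renormalized routing weights.
```python
import torch

# Toy sketch of Mixtral-style top-k expert routing (illustrative sizes only)
hidden = torch.randn(1, 5, 64)        # (batch, tokens, hidden_dim)
router = torch.nn.Linear(64, 8)       # one routing logit per expert (8 experts per layer)
probs = router(hidden).softmax(dim=-1)             # routing probabilities, shape (1, 5, 8)
weights, experts = torch.topk(probs, k=4, dim=-1)  # keep only the top-4 experts per token
weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize over the active experts
print(experts[0, 0], weights[0, 0])   # expert indices and mixing weights for the first token
```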
Future versions of Mula will be trained on a substantially larger Brazilian Portuguese dataset.
## Details
- **Architecture:** a Sparse Mixture of Experts (Mixtral implementation) pre-trained via causal language modeling
- **Size:** 747,596,544 parameters (only 407,857,152 activated parameters during runtime; verifiable as sketched after this list)
- **Context length:** 2048 tokens
- **Dataset:** [Pt-Corpus Instruct](https://huggingface.co/datasets/nicholasKluge/Pt-Corpus-Instruct) (6.2B tokens × 4 epochs)
- **Language:** Portuguese
- **Training time:** ~136 hours
- **Emissions:** 110 KgCO2eq (Germany)
- **Total energy consumption:** 300 kWh
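The figures listed above can be checked directly from the released checkpoint. A minimal sketch, assuming the checkpoint exposes the standard Mixtral configuration fields (`num_local_experts`, `num_experts_per_tok`, `max_position_embeddings`):
```python
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("MulaBR/Mula-8x160-v0.1")
print(config.model_type)               # "mixtral"
print(config.num_local_experts)        # 8 experts per layer
print(config.num_experts_per_tok)      # 4 experts activated per token
print(config.max_position_embeddings)  # 2048-token context

# Total parameter count (407,857,152 of these are activated per forward pass)
model = AutoModelForCausalLM.from_pretrained("MulaBR/Mula-8x160-v0.1")
print(sum(p.numel() for p in model.parameters()))  # 747,596,544
```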
## Intended Uses
The primary intended use of Mula-8x160-v0.1 is to research the challenges related to developing language models for low-resource languages. Checkpoints saved during training are intended to provide a controlled setting for performing scientific experiments. You may also further fine-tune and adapt Mula-8x160-v0.1 for deployment, as long as your use complies with the Apache 2.0 license. If you decide to use the pre-trained Mula-8x160-v0.1 as a basis for your fine-tuned model, please conduct your own risk and bias assessment.
## Out-of-scope Use
Mula-8x160-v0.1 is not intended for deployment. It is not a product and should not be used for human-facing interactions.
Mula-8x160-v0.1 is a Brazilian Portuguese-only model and is not suitable for translation or for generating text in other languages.
Mula-8x160-v0.1 has not been fine-tuned for downstream contexts in which language models are commonly deployed.
## Basic usage
Using the `pipeline`:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MulaBR/Mula-8x160-v0.1")
# Sampling is required when requesting multiple return sequences without beam search
completions = generator("Astronomia é a ciência", do_sample=True, num_return_sequences=2, max_new_tokens=100)

for comp in completions:
    print(f"🤖 {comp['generated_text']}")
```
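The inference widget on this model card uses `repetition_penalty=1.2`, `temperature=0.2`, `top_k=20`, `top_p=0.2`, and `max_new_tokens=150` (see the metadata header); a sketch passing the same settings through the pipeline:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="MulaBR/Mula-8x160-v0.1")

# The widget's sampling settings; do_sample=True is needed for them to take effect
completions = generator(
    "A PUCRS é uma universidade",
    do_sample=True,
    repetition_penalty=1.2,
    temperature=0.2,
    top_k=20,
    top_p=0.2,
    max_new_tokens=150,
)
print(completions[0]["generated_text"])
```
A repetition penalty above 1.0 helps avoid the repetition loops discussed under Limitations.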
Using the `AutoTokenizer` and `AutoModelForCausalLM`:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Load model and the tokenizer
tokenizer = AutoTokenizer.from_pretrained("MulaBR/Mula-8x160-v0.1", revision='main')
model = AutoModelForCausalLM.from_pretrained("MulaBR/Mula-8x160-v0.1", revision='main')
# Pass the model to your device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.eval()
model.to(device)
# Tokenize the inputs and pass them to the device
inputs = tokenizer("Astronomia é a ciência", return_tensors="pt").to(device)
# Generate some text (sampling is required for multiple return sequences without beam search)
completions = model.generate(**inputs, do_sample=True, num_return_sequences=2, max_new_tokens=100)
# Print the generated text
for completion in completions:
    print(f"🤖 {tokenizer.decode(completion)}")
```
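Note that the context length is 2,048 tokens, so longer inputs should be truncated at tokenization time. A minimal sketch (the oversized input is illustrative):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MulaBR/Mula-8x160-v0.1")

long_text = "Astronomia é a ciência. " * 2000  # illustrative oversized input
inputs = tokenizer(long_text, return_tensors="pt", truncation=True, max_length=2048)
print(inputs["input_ids"].shape)  # at most (1, 2048)
```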
## Limitations
Like almost all other language models trained on large text datasets scraped from the web, Mula-8x160-v0.1 exhibits behavior that makes it unsuitable as an out-of-the-box solution for many real-world applications, especially those requiring factual, reliable, and nontoxic text generation. Our models are all subject to the following:
- **Hallucinations:** This model can produce content that can be mistaken for truth but is, in fact, misleading or entirely false, i.e., hallucination.
- **Biases and Toxicity:** This model inherits the social and historical stereotypes from the data used to train it. Given these biases, the model can produce toxic content, i.e., harmful, offensive, or detrimental to individuals, groups, or communities.
- **Unreliable Code:** The model may produce incorrect code snippets and statements. These code generations should not be treated as suggestions or accurate solutions.
- **Language Limitations:** The model is primarily designed to understand standard Brazilian Portuguese. Other languages might challenge its comprehension, leading to potential misinterpretations or errors in response.
- **Repetition and Verbosity:** The model may get stuck in repetition loops (especially if the repetition penalty during generation is set too low) or produce verbose responses unrelated to the prompt it was given.
Hence, even though our models are released with a permissive license, we urge users to perform their own risk analysis before using them in real-world applications, and to have humans moderate the outputs in any application where these models interact with an audience, ensuring users are always aware they are interacting with a language model.
## Benchmarks
Evaluations were performed every 7,000 steps. The model was trained for 4 epochs; every step (batch size of 128) corresponds to 262,144 tokens, so each 7,000-step checkpoint interval covers roughly 1.8B tokens. Energy and emissions figures are cumulative (a perplexity reproduction sketch follows the table).
| Step | Perplexity | Evaluation Loss | Energy Consumption (kWh) | Emissions (KgCO2eq) |
|-------|------------|-----------------|--------------------|-----------|
| 7000 | 21.43 | 3.06 | 22.30 | 8.15 |
| 14000 | 15.84 | 2.76 | 44.58 | 16.29 |
| 21000 | 13.82 | 2.62 | 66.86 | 24.43 |
| 28000 | 12.70 | 2.54 | 89.18 | 32.59 |
| 35000 | 11.98 | 2.48 | 111.50 | 40.75 |
| 42000 | 11.42 | 2.43 | 133.83 | 48.91 |
| 49000 | 11.01 | 2.39 | 156.17 | 57.07 |
| 56000 | 10.66 | 2.36 | 178.64 | 65.28 |
| 63000 | 10.36 | 2.33 | 200.93 | 73.43 |
| 70000 | 10.12 | 2.31 | 223.24 | 81.59 |
| 77000 | 10.01 | 2.30 | 245.56 | 89.74 |
| 84000 | 9.91 | 2.294 | 267.90 | 97.91 |
| 91000 | 9.88 | 2.290 | 290.26 | 106.08 |
| 94805 | 9.88 | 2.290 | 302.39 | 110.52 |
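The perplexity column is the exponential of the evaluation loss (e.g., exp(2.290) ≈ 9.88). A minimal sketch for reproducing the computation on a sample text (the text itself is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MulaBR/Mula-8x160-v0.1")
model = AutoModelForCausalLM.from_pretrained("MulaBR/Mula-8x160-v0.1")
model.eval()

inputs = tokenizer("Astronomia é a ciência que estuda os corpos celestes.", return_tensors="pt")
with torch.no_grad():
    # With labels set, the model returns the mean token-level cross-entropy loss
    loss = model(**inputs, labels=inputs["input_ids"]).loss
print(f"Perplexity: {torch.exp(loss).item():.2f}")
```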
Evaluations on benchmarks were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)), with the translated versions of the tasks provided by [Laiviet](https://github.com/laiviet/lm-evaluation-harness).
| | **ARC** | **HellaSwag** | **MMLU** | **TruthfulQA** |
|----------------------|-----------|---------------|-----------|----------------|
| **Mula-4x160-v0.1** | 27.09 | 31.41 | 28.15 | 39.81 |
| **Mula-8x160-v0.1** | 26.15 | 33.06 | 28.14 | 41.69 |
Evaluations on Brazilian Portuguese benchmarks were performed using a [Portuguese implementation of the EleutherAI LM Evaluation Harness](https://github.com/eduagarcia/lm-evaluation-harness-pt) (created by [Eduardo Garcia](https://github.com/eduagarcia/lm-evaluation-harness-pt)).
| | **ASSIN2 RTE** | **ASSIN2 STS** | **BLUEX** | **ENEM** | **FAQUAD NLI** | **HateBR** | **PT Hate Speech** | **OAB Exams** | **TweetSentBR** |
|-----------------------|----------------|----------------|-----------|----------|----------------|------------|--------------------|---------------|-----------------|
| **Mula-4x160-v0.1** | 33.57 | 11.35 | 25.17 | 21.34 | 43.97 | 41.50 | 22.99 | 25.06 | 11.24 |
| **Mula-8x160-v0.1** | 22.38 | 4.73 | 21.28 | 20.50 | 43.97 | 33.33 | 40.21 | 26.65 | 18.46 |
## Cite as 🤗
```latex
@misc{mula2024BR,
title = {Mula: a Sparse Mixture of Experts Language Model trained in Brazilian Portuguese},
author = {Corr{\^e}a, Nicholas Kluge and Sen, Aniket and Falk, Sophia and Fatimah, Shiza},
howpublished = {\url{https://huggingface.co/MulaBR}},
  year = {2024}
}
```
## License
Mula-8x160-v0.1 is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
## Acknowledgements
The authors gratefully acknowledge the granted access to the [Marvin cluster](https://www.hpc.uni-bonn.de/en/systems/marvin) hosted by the [University of Bonn](https://www.uni-bonn.de/en) along with the support provided by its High Performance Computing & Analytics Lab.