
Mula-4x160-v0.1


Model Summary

Mula is a series of Sparse Mixture of Experts (SMoE) language models, all trained natively in Brazilian Portuguese, designed to help democratize LLMs for low-resource languages.

Mula-4x160-v0.1 is our first experiment in pre-training an SMoE, using the Pt-Corpus-Instruct dataset. It has 4 experts per layer and activates 2 of them for each token.

Future versions of Mula will be trained on a substantially larger Brazilian Portuguese dataset.

Details

  • Architecture: a Sparse Mixture of Experts (Mixtral implementation) pre-trained via causal language modeling
  • Size: 407,820,288 parameters (237,950,976 active parameters per forward pass; see the configuration sketch after this list)
  • Context length: 2048 tokens
  • Dataset: Pt-Corpus Instruct (6.2B tokens)
  • Language: Portuguese
  • Training time: ~30 hours
  • Emissions: 7.6 kgCO2 (Germany)
  • Total energy consumption: 15 kWh
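
These architectural details can be read directly from the checkpoint's configuration. A minimal sketch, assuming the checkpoint exposes the standard MixtralConfig fields:

from transformers import AutoConfig

# Load only the configuration; the attributes below are standard
# MixtralConfig fields, which a Mixtral-style checkpoint should expose
config = AutoConfig.from_pretrained("MulaBR/Mula-4x160-v0.1")

print(config.num_local_experts)        # expected: 4 experts per layer
print(config.num_experts_per_tok)      # expected: 2 experts routed per token
print(config.max_position_embeddings)  # expected: 2048 (context length)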

Intended Uses

The primary intended use of Mula-4x160-v0.1 is to research the challenges related to developing language models for low-resource languages. Checkpoints saved during training are intended to provide a controlled setting for performing scientific experiments. You may also further fine-tune and adapt Mula-4x160-v0.1 for deployment (as sketched below), provided your use complies with the Apache 2.0 license. If you decide to use pre-trained Mula-4x160-v0.1 as a basis for your fine-tuned model, please conduct your own risk and bias assessment.
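
A minimal fine-tuning sketch using the Hugging Face Trainer; the dataset name is a hypothetical placeholder and the hyperparameters are illustrative, not tuned:

from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("MulaBR/Mula-4x160-v0.1")
model = AutoModelForCausalLM.from_pretrained("MulaBR/Mula-4x160-v0.1")

# Causal LM tokenizers often ship without a padding token
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

dataset = load_dataset("your_ptbr_dataset", split="train")  # hypothetical name

def tokenize(batch):
    # Truncate to the model's 2048-token context length
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mula-finetuned", per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()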

Out-of-scope Use

Mula-4x160-v0.1 is not intended for deployment. It is not a product and should not be used for human-facing interactions.

Mula-4x160-v0.1 is a Brazilian Portuguese-only model and is not suitable for translation or for generating text in other languages.

Mula-4x160-v0.1 has not been fine-tuned for downstream contexts in which language models are commonly deployed.

Basic usage

Using the pipeline:

from transformers import pipeline

generator = pipeline("text-generation", model="MulaBR/Mula-4x160-v0.1")

# num_return_sequences > 1 requires sampling; greedy decoding would raise an error
completions = generator("Astronomia é a ciência", do_sample=True, num_return_sequences=2, max_new_tokens=100)

for comp in completions:
    print(f"🤖 {comp['generated_text']}")

Using the AutoTokenizer and AutoModelForCausalLM:

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load model and the tokenizer
tokenizer = AutoTokenizer.from_pretrained("MulaBR/Mula-4x160-v0.1", revision='main')
model = AutoModelForCausalLM.from_pretrained("MulaBR/Mula-4x160-v0.1", revision='main')

# Pass the model to your device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model.eval()
model.to(device)

# Tokenize the inputs and pass them to the device
inputs = tokenizer("Astronomia é a ciência", return_tensors="pt").to(device)

# Generate some text (num_return_sequences > 1 requires sampling)
completions = model.generate(**inputs, do_sample=True, num_return_sequences=2, max_new_tokens=100)

# Print the generated text
for completion in completions:
    print(f'🤖 {tokenizer.decode(completion, skip_special_tokens=True)}')

Limitations

Like almost all other language models trained on large text datasets scraped from the web, Mula-4x160-v0.1 exhibits behavior that does not make it an out-of-the-box solution for many real-world applications, especially those requiring factual, reliable, nontoxic text generation. Our models are all subject to the following:

  • Hallucinations: This model can produce content that can be mistaken for truth but is, in fact, misleading or entirely false, i.e., hallucination.

  • Biases and Toxicity: This model inherits the social and historical stereotypes from the data used to train it. Given these biases, the model can produce toxic content, i.e., harmful, offensive, or detrimental to individuals, groups, or communities.

  • Unreliable Code: The model may produce incorrect code snippets and statements. These code generations should not be treated as suggestions or accurate solutions.

  • Language Limitations: The model is primarily designed to understand standard Brazilian Portuguese. Other languages might challenge its comprehension, leading to potential misinterpretations or errors in response.

  • Repetition and Verbosity: The model may get stuck in repetition loops (especially if the repetition penalty during generation is set to a low value) or produce verbose responses unrelated to the prompt it was given (see the sketch after this list).
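
As a hedged illustration of mitigating repetition loops, the call below uses only standard generate() arguments; the specific values are illustrative starting points, not tuned for this model:

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MulaBR/Mula-4x160-v0.1")
model = AutoModelForCausalLM.from_pretrained("MulaBR/Mula-4x160-v0.1")

inputs = tokenizer("Astronomia é a ciência", return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.2,   # values > 1.0 penalize already-generated tokens
    no_repeat_ngram_size=3,   # block verbatim 3-gram repeats
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))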

Hence, even though our models are released with a permissive license, we urge users to perform their own risk analysis before using them in real-world applications, and to have humans moderate the outputs of these models in applications where they interact with an audience, ensuring users are always aware they are interacting with a language model.

Benchmarks

Evaluations on benchmarks were performed using the Language Model Evaluation Harness (by EleutherAI); we used the translations of these tasks provided by Laiviet.

|                 | ARC   | HellaSwag | MMLU  | TruthfulQA |
|-----------------|-------|-----------|-------|------------|
| Mula-4x160-v0.1 | 27.09 | 31.41     | 28.15 | 39.81      |

Evaluations on Brazilian Portuguese benchmarks were performed using a Portuguese implementation of the EleutherAI LM Evaluation Harness (created by Eduardo Garcia).

|                 | ASSIN2 RTE | ASSIN2 STS | BLUEX | ENEM  | FAQUAD NLI | HateBR | OAB Exams | TweetSentBR |
|-----------------|------------|------------|-------|-------|------------|--------|-----------|-------------|
| Mula-4x160-v0.1 | 33.57      | 11.35      | 25.17 | 21.34 | 43.97      | 41.50  | 25.06     | 11.24       |
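
For reproducibility, a minimal sketch of scoring the model with the harness's Python API (assuming lm-eval v0.4+; the task names are generic placeholders, since the translated and Portuguese tasks live in the forks mentioned above):

import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=MulaBR/Mula-4x160-v0.1",
    tasks=["arc_challenge", "hellaswag"],  # placeholders for the translated tasks
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])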

Cite as 🤗


@misc{mula2024BR,
  title = {Mula: a Sparse Mixture of Experts Language Model trained in Brazilian Portuguese},
  author = {Corr{\^e}a, Nicholas Kluge and Sen, Aniket and Falk, Sophia and Fatimah, Shiza},
  howpublished = {\url{https://huggingface.co/MulaBR}},
  year={2024}
}

License

Mula-4x160-v0.1 is licensed under the Apache License, Version 2.0. See the LICENSE file for more details.
