---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- text-generation-inference
datasets:
- nicholasKluge/Pt-Corpus-Instruct
metrics:
- perplexity
pipeline_tag: text-generation
widget:
- text: A PUCRS é uma universidade
  example_title: Exemplo
- text: A muitos anos atrás, em uma galáxia muito distante, vivia uma raça de
  example_title: Exemplo
- text: Em meio a um escândalo, a frente parlamentar pediu ao Senador Silva para
  example_title: Exemplo
inference:
  parameters:
    repetition_penalty: 1.2
    temperature: 0.2
    top_k: 20
    top_p: 0.2
    max_new_tokens: 150
co2_eq_emissions:
  emissions: 7.6
  source: CodeCarbon
  training_type: pre-training
  geographical_location: Germany
  hardware_used: NVIDIA A100-SXM4-40GB
model-index:
- name: Mula-4x160-v0.1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: ENEM Challenge (No Images)
      type: eduagarcia/enem_challenge
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 21.34
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-4x160-v0.1
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BLUEX (No Images)
      type: eduagarcia-temp/BLUEX_without_images
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 25.17
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-4x160-v0.1
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: OAB Exams
      type: eduagarcia/oab_exams
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 25.06
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-4x160-v0.1
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Assin2 RTE
      type: assin2
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: f1_macro
      value: 33.57
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-4x160-v0.1
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Assin2 STS
      type: eduagarcia/portuguese_benchmark
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: pearson
      value: 11.35
      name: pearson
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-4x160-v0.1
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: FaQuAD NLI
      type: ruanchaves/faquad-nli
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: f1_macro
      value: 43.97
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-4x160-v0.1
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HateBR Binary
      type: ruanchaves/hatebr
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 41.5
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-4x160-v0.1
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: PT Hate Speech Binary
      type: hate_speech_portuguese
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 22.99
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-4x160-v0.1
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: tweetSentBR
      type: eduagarcia/tweetsentbr_fewshot
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 11.24
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-4x160-v0.1
      name: Open Portuguese LLM Leaderboard
---

# Mula-4x160-v0.1

## Model Summary

Mula is a series of Sparse Mixture of Experts (SMoE) language models, all trained natively in Brazilian Portuguese, designed to help democratize LLMs for low-resource languages. Mula-4x160-v0.1 is one of our first experiments in pre-training an SMoE, using the [Pt-Corpus-Instruct](https://huggingface.co/datasets/nicholasKluge/Pt-Corpus-Instruct) dataset. It has 4 experts per layer and activates 2 of them for each token, as illustrated in the sketch below. Future versions of Mula will be trained on a substantially larger Brazilian Portuguese dataset.
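As a rough illustration of the "4 experts, 2 active per token" scheme, the sketch below implements Mixtral-style top-2 routing on toy tensors. The hidden size of 160 and the single-linear-layer experts are assumptions made purely for illustration; they are not the model's actual dimensions or expert architecture.

```python
import torch

# Toy sketch of Mixtral-style top-2 routing over 4 experts (illustrative
# only; hidden size and one-layer experts are assumed, not taken from
# the actual model).
num_experts, top_k, hidden_size = 4, 2, 160
router = torch.nn.Linear(hidden_size, num_experts, bias=False)
experts = torch.nn.ModuleList(
    torch.nn.Linear(hidden_size, hidden_size) for _ in range(num_experts)
)

tokens = torch.randn(5, hidden_size)  # a batch of 5 token embeddings

with torch.no_grad():
    # The router scores every expert for each token; only the two
    # highest-scoring experts actually process the token.
    scores, chosen = router(tokens).topk(top_k, dim=-1)
    weights = torch.softmax(scores, dim=-1)  # renormalize the kept scores

    output = torch.zeros_like(tokens)
    for t in range(tokens.size(0)):
        for slot in range(top_k):
            e = chosen[t, slot].item()
            output[t] += weights[t, slot] * experts[e](tokens[t])
```

Because only 2 of the 4 experts run for each token, the number of parameters active per forward pass is well below the total parameter count (see Details below).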
## Details

- **Architecture:** a Sparse Mixture of Experts (Mixtral implementation) pre-trained via causal language modeling
- **Size:** 407,820,288 parameters (only 237,950,976 parameters are active per forward pass)
- **Context length:** 2048 tokens
- **Dataset:** [Pt-Corpus Instruct](https://huggingface.co/datasets/nicholasKluge/Pt-Corpus-Instruct) (6.2B tokens)
- **Language:** Portuguese
- **Training time:** ~30 hours
- **Emissions:** 7.6 kgCO2eq (Germany)
- **Total energy consumption:** 15 kWh
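The expert layout listed above can be checked directly from the model's configuration. This is a minimal sketch assuming the standard Hugging Face Mixtral configuration attributes (`num_local_experts`, `num_experts_per_tok`), which the Mixtral-based implementation noted above should expose:

```python
from transformers import AutoConfig

# Inspect the MoE layout; attribute names follow the Hugging Face
# Mixtral configuration, which this card says the model uses.
config = AutoConfig.from_pretrained("MulaBR/Mula-4x160-v0.1")
print(config.num_local_experts)    # expected: 4 experts per layer
print(config.num_experts_per_tok)  # expected: 2 active per token
```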
## Intended Uses

The primary intended use of Mula-4x160-v0.1 is to research the challenges of developing language models for low-resource languages. Checkpoints saved during training are intended to provide a controlled setting for performing scientific experiments. You may also further fine-tune and adapt Mula-4x160-v0.1 for deployment, as long as your use complies with the Apache 2.0 license. If you decide to use the pre-trained Mula-4x160-v0.1 as the basis for your fine-tuned model, please conduct your own risk and bias assessment.

## Out-of-scope Use

Mula-4x160-v0.1 is not intended for deployment. It is not a product and should not be used for human-facing interactions. Mula-4x160-v0.1 is a Brazilian Portuguese-only model and is not suitable for translation or for generating text in other languages. It has not been fine-tuned for the downstream contexts in which language models are commonly deployed.

## Basic usage

Using the `pipeline`:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="MulaBR/Mula-4x160-v0.1")

completions = generator(
    "Astronomia é a ciência",
    do_sample=True,  # ensure sampling so the two returned sequences can differ
    num_return_sequences=2,
    max_new_tokens=100,
)

for comp in completions:
    print(f"🤖 {comp['generated_text']}")
```

Using the `AutoTokenizer` and `AutoModelForCausalLM`:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the model and the tokenizer
tokenizer = AutoTokenizer.from_pretrained("MulaBR/Mula-4x160-v0.1", revision="main")
model = AutoModelForCausalLM.from_pretrained("MulaBR/Mula-4x160-v0.1", revision="main")

# Move the model to your device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.eval()
model.to(device)

# Tokenize the input and move it to the same device
inputs = tokenizer("Astronomia é a ciência", return_tensors="pt").to(device)

# Generate some text
completions = model.generate(
    **inputs,
    do_sample=True,  # ensure sampling so the two returned sequences can differ
    num_return_sequences=2,
    max_new_tokens=100,
)

# Print the generated text
for completion in completions:
    print(f"🤖 {tokenizer.decode(completion)}")
```
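For quick experiments, the widget settings in this card's YAML metadata suggest conservative sampling parameters. A sketch applying them via the same `pipeline` API:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="MulaBR/Mula-4x160-v0.1")

# Sampling settings taken from the inference parameters in this
# card's YAML header.
output = generator(
    "Astronomia é a ciência",
    do_sample=True,
    temperature=0.2,
    top_k=20,
    top_p=0.2,
    repetition_penalty=1.2,
    max_new_tokens=150,
)

print(output[0]["generated_text"])
```

The low temperature and top-p keep generations focused, while the repetition penalty mitigates the repetition loops noted in the Limitations below.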
## Limitations

Like almost all other language models trained on large text datasets scraped from the web, Mula-4x160-v0.1 exhibits behavior that does not make it an out-of-the-box solution for many real-world applications, especially those requiring factual, reliable, nontoxic text generation. Our models are all subject to the following:

- **Hallucinations:** This model can produce content that can be mistaken for truth but is, in fact, misleading or entirely false, i.e., hallucination.
- **Biases and Toxicity:** This model inherits the social and historical stereotypes in the data used to train it. Given these biases, the model can produce toxic content, i.e., content that is harmful, offensive, or detrimental to individuals, groups, or communities.
- **Unreliable Code:** The model may produce incorrect code snippets and statements. These code generations should not be treated as suggestions or accurate solutions.
- **Language Limitations:** The model is primarily designed to understand standard Brazilian Portuguese. Other languages might challenge its comprehension, leading to potential misinterpretations or errors in its responses.
- **Repetition and Verbosity:** The model may get stuck in repetition loops (especially if the repetition penalty during generation is set to a low value) or produce verbose responses unrelated to the prompt it was given.

Hence, even though our models are released with a permissive license, we urge users to perform their own risk analysis before using them in real-world applications. In applications where the models interact with an audience, their outputs should be moderated by humans, and users should always be made aware that they are interacting with a language model.

## Benchmarks

Evaluations on benchmarks were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)); the translated versions of these tasks that we used come from [Laiviet's fork](https://github.com/laiviet/lm-evaluation-harness) of the LM-Evaluation-Harness.

|                     | **ARC** | **HellaSwag** | **MMLU** | **TruthfulQA** |
|---------------------|---------|---------------|----------|----------------|
| **Mula-4x160-v0.1** | 27.09   | 31.41         | 28.15    | 39.81          |
| **Mula-8x160-v0.1** | 26.15   | 33.06         | 28.14    | 41.69          |

Evaluations on Brazilian Portuguese benchmarks were performed using a [Portuguese implementation of the EleutherAI LM Evaluation Harness](https://github.com/eduagarcia/lm-evaluation-harness-pt) (created by [Eduardo Garcia](https://github.com/eduagarcia/lm-evaluation-harness-pt)).

|                     | **ASSIN2 RTE** | **ASSIN2 STS** | **BLUEX** | **ENEM** | **FAQUAD NLI** | **HateBR** | **PT Hate Speech** | **OAB Exams** | **TweetSentBR** |
|---------------------|----------------|----------------|-----------|----------|----------------|------------|--------------------|---------------|-----------------|
| **Mula-4x160-v0.1** | 33.57          | 11.35          | 25.17     | 21.34    | 43.97          | 41.50      | 22.99              | 25.06         | 11.24           |
| **Mula-8x160-v0.1** | 33.51          | 0              | 20.17     | 19.94    | 43.97          | 33.33      | 42.69              | 24.37         | 24.60           |

## Cite as 🤗

```latex
@misc{mula2024BR,
  title = {Mula: a Sparse Mixture of Experts Language Model trained in Brazilian Portuguese},
  author = {Corr{\^e}a, Nicholas Kluge and Sen, Aniket and Falk, Sophia and Fatimah, Shiza},
  howpublished = {\url{https://huggingface.co/MulaBR}},
  year = {2024}
}
```

## License

Mula-4x160-v0.1 is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.

## Acknowledgements

The authors gratefully acknowledge the access granted to the [Marvin cluster](https://www.hpc.uni-bonn.de/en/systems/marvin) hosted by the [University of Bonn](https://www.uni-bonn.de/en), along with the support provided by its High Performance Computing & Analytics Lab.