---
language:
  - en
  - es
  - ca
tags:
  - spanish
  - catalan
  - aguila-7b
datasets:
  - BSC-LT/open_data_26B_tokens_balanced_es_ca
metrics:
  - ppl
model-index:
  - name: aguila_7b
    results:
      - task:
          name: Causal Language Modeling
          type: text-generation
        metrics:
          - name: Perplexity
            type: ppl
            value: 8.59
widget:
  - text: |-
      Respòn a la pregunta següent.
      Pregunta: "Quina és la capital de Suècia?"
      Resposta: "La capital de Suècia és Estocolm."
      ----
      Respòn a la pregunta següent.
      Pregunta: "Quina beguda es consumeix als matins per despertar-se?"
      Resposta: "La majoria de gent consumeix cafè per despertar-se."
      ----
      Respòn a la pregunta següent.
      Pregunta: "Explica com funciona un motor de combustió"
      Resposta:
    example_title: Pregunta-Resposta
  - text: >-
      Extrae las entidades nombradas del siguiente texto:

      Texto: "Me llamo Wolfgang y vivo en Berlin"

      Entidades: Wolfgang:PER, Berlin:LOC

      ----

      Extrae las entidades nombradas del siguiente texto:

      Texto: "Hoy voy a visitar el parc güell tras salir del barcelona
      supercomputing center"

      Entidades: parc güell:LOC, barcelona supercomputing center:LOC

      ----

      Extrae las entidades nombradas del siguiente texto:

      Texto: "Maria y Miguel no tienen ningún problema contigo"

      Entidades: Maria:PER, Miguel:PER

      ----

      Extrae las entidades nombradas del siguiente texto:

      Texto: "Damián se cortó el pelo"

      Entidades: Damián:PER

      ----

      Extrae las entidades nombradas del siguiente texto:

      Texto: "Lo mejor de Barcelona és el bar de mi amigo Pablo"

      Entidades: Pablo:PER, Barcelona:LOC

      ----

      Extrae las entidades nombradas del siguiente texto:

      Texto: "Carlos comparte piso con Marc"

      Entidades:
    example_title: Entidades-Nombradas
license: apache-2.0
pipeline_tag: text-generation
---

Ǎguila-7B

Model description

Ǎguila-7B is a transformer-based causal language model for Catalan, Spanish, and English. It is based on the Falcon-7B model and has been trained on a 26B-token trilingual corpus collected from publicly available corpora and web crawls.

Intended uses and limitations

The Ǎguila-7B model is ready to use only for causal language modeling, i.e. text-generation tasks. It is, however, intended to be fine-tuned on downstream generative tasks.

How to use

Here is how to use this model:

import torch
import transformers
from transformers import AutoTokenizer

input_text = "Maria y Miguel no tienen ningún "
model_id = "projecte-aina/aguila-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Build a text-generation pipeline; device_map="auto" places the weights on
# the available GPUs, and bfloat16 halves the memory footprint.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

# The pipeline returns a list with one dict per generated sequence.
generation = pipeline(
    input_text,
    max_length=200,
    do_sample=True,
    top_k=10,
    eos_token_id=tokenizer.eos_token_id,
)

print(f"Result: {generation[0]['generated_text']}")
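
The pipeline wrapper above is the simplest route. If you prefer to call the model directly, a minimal sketch using the standard AutoModelForCausalLM interface is shown below; the prompt and generation settings are illustrative, not recommended defaults:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "projecte-aina/aguila-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

# Tokenize the prompt and move it to the same device as the model weights.
inputs = tokenizer("Maria y Miguel no tienen ningún ", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=50,  # illustrative length
    do_sample=True,
    top_k=10,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))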

Limitations and bias

At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and this model card will be updated once that work is completed.

Language adaptation

We adapted the original Falcon-7B model to Spanish and Catalan by swapping the tokenizer and adjusting the embedding layer.

The adaptation procedure is explained in this blog.
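
As a rough illustration of what such an adaptation involves, the sketch below swaps in a new tokenizer and resizes the embedding matrix, copying over the vectors of tokens shared between the two vocabularies. This is a simplification, not the exact procedure; the path of the new tokenizer is a placeholder.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Original English-centric model and a new tokenizer trained on ca/es/en text.
# "path/to/new_tokenizer" is a placeholder for illustration.
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b", trust_remote_code=True)
old_tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
new_tokenizer = AutoTokenizer.from_pretrained("path/to/new_tokenizer")

old_embeddings = model.get_input_embeddings().weight.data.clone()
old_vocab = old_tokenizer.get_vocab()

# Resize the embedding matrix to the new vocabulary size.
model.resize_token_embeddings(len(new_tokenizer))
new_embeddings = model.get_input_embeddings().weight.data

# Reuse the old vectors for tokens present in both vocabularies so that
# continued pre-training starts from a sensible initialization; tokens new
# to the vocabulary keep their (re)initialized rows.
for token, new_id in new_tokenizer.get_vocab().items():
    old_id = old_vocab.get(token)
    if old_id is not None and old_id < old_embeddings.shape[0]:
        new_embeddings[new_id] = old_embeddings[old_id]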

Training

Training data

The training corpus consists of 26B tokens from several corpora gathered from web crawls and public-domain data.

| Dataset             | Language | Tokens (per epoch) | Epochs       |
|---------------------|----------|--------------------|--------------|
| Wikipedia           | en       | 2169.97M           | 1.428144485  |
| C4_es               | es       | 53709.80M          | 0.1049686196 |
| Biomedical          | es       | 455.03M            | 0.7140722425 |
| Legal               | es       | 995.70M            | 0.7140722425 |
| Wikipedia           | es       | 693.60M            | 1.428144485  |
| Gutenberg           | es       | 53.18M             | 0.7140722425 |
| C4_ca               | ca       | 2826.00M           | 2.142216727  |
| Biomedical          | ca       | 11.80M             | 1.428144485  |
| RacoCatalá Noticias | ca       | 17.16M             | 2.142216727  |
| RacoCatalá Forums   | ca       | 333.73M            | 2.142216727  |
| CaWaC               | ca       | 57.79M             | 2.142216727  |
| Wikipedia           | ca       | 228.01M            | 3.570361212  |
| Vilaweb             | ca       | 50.34M             | 2.142216727  |

The dataset has the following language distribution:

| Language | %      |
|----------|--------|
| en       | 16.84% |
| es       | 41.38% |
| ca       | 41.79% |

Training procedure

The training corpus was tokenized using a byte-level Byte-Pair Encoding (BPE) tokenizer, as used in the original RoBERTa model, with a vocabulary size of 50,257 tokens. Once the model was initialized with the new vocabulary, we continued its pre-training on the three target languages: Catalan, Spanish, and English. We kept a small amount of English data in order to avoid catastrophic forgetting. Training lasted a total of 320 hours on 8 NVIDIA H100 GPUs with 80GB of memory each.
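
As a quick sanity check of the tokenizer, one can inspect how it segments text in the three languages; the sentences below are illustrative:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("projecte-aina/aguila-7b")
print(len(tokenizer))  # vocabulary size reported above (50,257)

# The same BPE tokenizer covers the three training languages.
for sentence in [
    "La capital de Suècia és Estocolm.",           # Catalan
    "Maria y Miguel no tienen ningún problema.",   # Spanish
    "The training corpus mixes three languages.",  # English
]:
    print(tokenizer.tokenize(sentence))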

Training hyperparameters

  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • train_batch_size: 1
  • eval_batch_size: 1
  • total_train_batch_size: 8
  • total_eval_batch_size: 8
  • optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
  • learning_rate: 5e-05
  • lr_scheduler_type: linear
  • num_epochs: 1.0
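
For reference, these settings map roughly onto a transformers TrainingArguments configuration as sketched below. This is an illustrative reconstruction, not the actual training script; output_dir and the bf16 flag are assumptions.

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="aguila-7b-continued-pretraining",  # placeholder
    seed=42,
    per_device_train_batch_size=1,  # 8 GPUs -> total train batch size of 8
    per_device_eval_batch_size=1,
    learning_rate=5e-5,
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    bf16=True,  # assumption, consistent with the bfloat16 inference example above
)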

Framework versions

  • Transformers 4.30.2
  • PyTorch 2.0.0
  • Datasets 2.13.1
  • Tokenizers 0.13.3

Additional information

Author

The Language Technologies Unit at the Barcelona Supercomputing Center.

Contact

For further information, please send an email to langtech@bsc.es.

Copyright

Copyright (c) 2023 Langtech Unit at Barcelona Supercomputing Center.

License

Apache License, Version 2.0

Funding

This work was partially funded by the Generalitat de Catalunya within the framework of Projecte AINA.

Disclaimer

The model published in this repository is intended for a generalist purpose and is made available to third parties.

This model may have biases and/or other undesirable distortions.

When third parties deploy or provide systems and/or services to other parties using this model (or any systems based on it) or become users of the model, they should note that it is their responsibility to mitigate the risks arising from its use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.

In no event shall the owner and creator of the model (Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties.