pablo-rf committed on
Commit
78957ff
1 Parent(s): 8413d77

Update README.md

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -97,7 +97,7 @@ library_name: transformers
 <details>
 <summary>Click to expand</summary>
 
-- [Llama-3.1-Carballo](#llama31-carballo)
+- [Llama-3.1-Carballo](#llama-31-carballo)
 - [Table of Contents](#table-of-contents)
 - [Model description](#model-description)
 - [Intended uses and limitations](#intended-uses-and-limitations)
@@ -120,7 +120,7 @@ library_name: transformers
 **Llama-3.1-Carballo** is a 8B-parameter transformer-based causal language model for Galician, Portuguese, Spanish, Catalan and English.
 It is the result of a continual pretraining of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) with a multilingual corpus of almost 20B tokens, with an emphasis on Galician texts.
 
-This model is part of the **Carballo familily**, a family of LLMs specialized in Galician. Smaller models can be founded [here](https://huggingface.co/collections/proxectonos/text-models-65d49fa54e358ce02a9699c8)
+This model is part of the **Carballo familily**, a family of LLMs specialized in Galician. Smaller models can be found [here](https://huggingface.co/collections/proxectonos/text-models-65d49fa54e358ce02a9699c8)
 ## Intended uses and limitations
 
 The **Llama-3.1-Carballo** model is ready-to-use only for causal language modeling.
@@ -193,7 +193,7 @@ The corpus is structured as follows:
 - num_epochs: 1.0
 
 ### Framework
-The traininf was conducted in the Galicia Supercomputing Center ([CESGA](https://www.cesga.es/en/home-2/)), using 5 nodes with 2 GPUs NVIDIA A100 each one.
+The training was conducted in the Galicia Supercomputing Center ([CESGA](https://www.cesga.es/en/home-2/)), using 5 nodes with 2 GPUs NVIDIA A100 each one.
 
 ## Evaluation
 In process...
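The README quoted in the diff states the model is "ready-to-use only for causal language modeling." A minimal usage sketch with Hugging Face `transformers`, assuming the checkpoint is published under the `proxectonos` organization as `proxectonos/Llama-3.1-Carballo` (the repository id is an assumption, not confirmed by the diff):

```python
# Hypothetical repository id; adjust to the actual Hub path of this model card.
MODEL_ID = "proxectonos/Llama-3.1-Carballo"

def build_generation_kwargs(max_new_tokens: int = 50) -> dict:
    """Sampling settings for short text continuations."""
    return {"max_new_tokens": max_new_tokens, "do_sample": True, "top_p": 0.95}

if __name__ == "__main__":
    # Imported here because downloading an 8B checkpoint needs
    # substantial disk space and GPU memory.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    # Galician prompt, matching the model's primary target language.
    inputs = tokenizer("Hoxe en Galicia", return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, **build_generation_kwargs())
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since the card describes plain continual pretraining (not instruction tuning), free-text continuation rather than chat-style prompting is the appropriate interaction mode.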