---
license: mit
language:
- pt
tags:
- gervasio-pt*
- gervasio-ptpt
- gervasio-ptbr
- gervasio-ptpt-base
- gervasio-ptbr-base
- portulan
- albertina-pt*
- albertina-ptpt
- albertina-ptbr
- albertina-ptbr-nobrwac
- albertina-ptpt-base
- albertina-ptbr-base
- clm
- gpt
- portuguese
- decoder
- foundation model
datasets:
- PORTULAN/glue-ptpt
- PORTULAN/extraglue
---
</br>
</br>
<img align="left" width="40" height="40" src="https://github.githubassets.com/images/icons/emoji/unicode/1f917.png">
<p style="text-align: center;">&nbsp;&nbsp;&nbsp;&nbsp;This is the model card for Gervásio 7B PT-BR Decoder. 
  You may be interested in some of the other models in the <a href="https://huggingface.co/PORTULAN">Albertina (encoders) and Gervásio (decoders) families</a>.
</p>
</br>
</br>

# Gervásio 7B PT-BR

</br>

**Gervásio PT-*** is a **fully open** decoder for the **Portuguese language**. 


It is a **decoder** of the LLaMA family, based on the Transformer neural architecture and developed over the LLaMA 2 7B model.
It was further improved through additional training over language resources that include new instruction data sets of Portuguese prepared for this purpose.

It has different versions that were trained for different variants of Portuguese (PT), 
namely for the European variant, spoken in Portugal ([**gervasio-7b-portuguese-ptpt-decoder**](https://huggingface.co/PORTULAN/gervasio-7b-portuguese-ptpt-decoder)), and for the American variant, spoken in Brazil ([**gervasio-7b-portuguese-ptbr-decoder**](https://huggingface.co/PORTULAN/gervasio-7b-portuguese-ptbr-decoder)).

All versions of Gervásio are **openly distributed for free under an open license**, including for research and commercial purposes, and given their size, they can be run on consumer-grade hardware.

**Gervásio 7B PT-BR** is developed by NLX-Natural Language and Speech Group, at the University of Lisbon, Faculty of Sciences, Department of Informatics, Portugal.

For the record, its full name is **Gervásio Produz Textos em Português**, to which corresponds the natural acronym **GPT PT**, 
and which is known more shortly as **Gervásio PT-***, or, even more briefly, just as **Gervásio**, among his acquaintances.

These models are fully documented in the respective [publication](https://arxiv.org/abs/?):

``` latex
@misc{gervasio-pt,
      title={Advancing Generative AI for Portuguese with Open Decoder Gervásio~PT*}, 
      author={Rodrigo Santos and João Silva and Luís Gomes and João Rodrigues and António Branco},
      year={2024},
      eprint={?},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

Please use the above canonical reference when using or citing this model.


<br>


# Model Description

**This model card is for Gervásio 7B PT-BR**, with 7 billion parameters, a hidden size of 4096 units, an intermediate size of 11,008 units, 32 attention heads, 32 hidden layers, and a tokenizer obtained using the Byte-Pair Encoding (BPE) algorithm implemented with SentencePiece, featuring a vocabulary size of 32,000.
Gervásio 7B PT-BR is distributed under an [MIT license](https://huggingface.co/PORTULAN/albertina-ptpt/blob/main/LICENSE).
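
As a hedged illustration, the architecture figures above can be checked directly from the published configuration; the repository name below is the one given in the "How to use" section of this card.

```python3
from transformers import AutoConfig

# A minimal sketch: inspect the configuration to confirm the figures above.
config = AutoConfig.from_pretrained("PORTULAN/gervasio-7b-portuguese-ptbr-decoder")
print(config.hidden_size)          # expected: 4096
print(config.intermediate_size)    # expected: 11008
print(config.num_attention_heads)  # expected: 32
print(config.num_hidden_layers)    # expected: 32
print(config.vocab_size)           # expected: 32000
```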


<br>

# Training Data

**Gervásio 7B PT-BR** was trained with standard supervised fine-tuning, and to keep some alignment with mainstream benchmarks for English, we resorted to tasks and respective datasets in the GLUE and the SuperGLUE collections.


We selected those datasets where the outcome of their machine translation into American Portuguese could preserve, in the target language, the linguistic properties at stake.

From GLUE, we resorted to the following four tasks:
- MRPC (paraphrase detection).
- RTE (recognizing textual entailment).
- STS-B (semantic textual similarity).
- WNLI (coreference and natural language inference).

And from SuperGLUE, we included these other four tasks: 
- BoolQ (yes/no question answering).
- CB (inference with 3 labels).
- COPA (reasoning).
- MultiRC (question answering).


Instruction templates were manually crafted for each task.
These take the various fields of the dataset and arrange them into a prompt.
The templates are listed in full detail in the [ExtraGLUE dataset](https://huggingface.co/datasets/PORTULAN/extraglue).
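
As a rough illustration only (the real templates are those documented in the ExtraGLUE dataset), a template for a paraphrase-detection task could arrange the fields of an example into a prompt as follows; the field names and wording here are hypothetical.

```python3
# A hypothetical MRPC-style template; field names and phrasing are illustrative,
# not the exact templates used for Gervásio.
TEMPLATE = (
    "Frase 1: {sentence1}\n"
    "Frase 2: {sentence2}\n"
    "As duas frases são paráfrases uma da outra? Responda sim ou não.\n"
    "Resposta:"
)

def build_prompt(example: dict) -> str:
    """Arrange the fields of one dataset example into a prompt."""
    return TEMPLATE.format(**example)

print(build_prompt({"sentence1": "O gato dorme.", "sentence2": "O felino está dormindo."}))
```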

# Training Details

We applied supervised fine-tuning with a causal language modeling (CLM) training objective, using a zero-out technique during fine-tuning.
Specifically, while the entire prompt received attention during fine-tuning, only the response tokens were subjected to back-propagation.
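
As a sketch of this idea, assuming the common Hugging Face convention that label positions set to -100 are excluded from the loss (the actual preprocessing code for Gervásio may differ), one example could be built as follows.

```python3
from transformers import AutoTokenizer

# Sketch of the zero-out technique: the prompt stays in the input (so it still
# receives attention), but its label positions are set to -100 so that only the
# response tokens contribute to the CLM loss during back-propagation.
tokenizer = AutoTokenizer.from_pretrained("PORTULAN/gervasio-7b-portuguese-ptbr-decoder")

def build_example(prompt: str, response: str) -> dict:
    prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    response_ids = tokenizer(response, add_special_tokens=False)["input_ids"]
    input_ids = prompt_ids + response_ids + [tokenizer.eos_token_id]
    labels = [-100] * len(prompt_ids) + response_ids + [tokenizer.eos_token_id]
    return {"input_ids": input_ids, "labels": labels}
```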

In terms of hyper-parameters, both models were trained with a learning rate of 2 × 10⁻⁵, a weight decay of 0.1 and a two-epoch training regime without warm-up. To ensure the same number of tokens was back-propagated per step, we employed an input sequence of 512 tokens with a batch size of 16 and 16 accumulation steps.
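
These hyper-parameters map roughly onto a Hugging Face `TrainingArguments` configuration as sketched below; the use of the `Trainer` API and the output path are assumptions, not a description of the actual training scripts, and the 512-token sequence length is handled at tokenization time (see the sketch further below).

```python3
from transformers import TrainingArguments

# A hedged mapping of the hyper-parameters listed above onto the Trainer API.
training_args = TrainingArguments(
    output_dir="gervasio-7b-ptbr-sft",    # illustrative output path
    learning_rate=2e-5,
    weight_decay=0.1,
    num_train_epochs=2,
    warmup_steps=0,                       # no warm-up
    per_device_train_batch_size=16,
    gradient_accumulation_steps=16,
)
```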

Due to hardware limitations that imposed a shorter sequence length (512) compared to the base model (4096), instead of the typical practice of concatenating all training examples and then dividing them into batches with the same input sequence length, we separated each example individually.
In other words, each example occupies the full input sequence length.

To achieve this, we adapted the tokenizer of the base model to accept padding, which allows grouping examples of different sizes into batches while preserving the original input sequence length.
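
A minimal sketch of this adaptation, assuming a pad token is added to the base LLaMA 2 tokenizer (which ships without one) and that each example is padded to the full 512-token sequence:

```python3
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PORTULAN/gervasio-7b-portuguese-ptbr-decoder")
if tokenizer.pad_token is None:
    # Illustrative choice; if a brand-new token were added, the model's
    # embedding matrix would need to be resized accordingly.
    tokenizer.add_special_tokens({"pad_token": "<pad>"})

batch = tokenizer(
    ["Um exemplo curto.", "Um exemplo um pouco mais longo do que o primeiro."],
    padding="max_length",
    truncation=True,
    max_length=512,
    return_tensors="pt",
)
print(batch["input_ids"].shape)  # torch.Size([2, 512])
```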

For the model training process, we resorted to an a2-megagpu-16gb Google Cloud A2 VM, equipped with 16 GPUs, 96 vCPUs, and 1,360 GB of RAM. 
The training of each model took approximately two hours.

# Evaluation

For testing, we reserved the translated datasets MRPC (similarity) and RTE (inference), from GLUE, and COPA (reasoning/qa), from SuperGLUE, which were taken as representative of three major types of tasks and were not seen during training.
We also employed data augmentation techniques to enhance the size and diversity of our dataset.
This involved repurposing the tasks in various ways, such as answer generation from MultiRC, question generation from BoolQ, and other relevant modifications.


| Model                    | MRPC (F1)      | RTE (F1)       | COPA (F1) |
|--------------------------|----------------|----------------|-----------|
| **Gervásio 7B PT-BR**    | **0.7822**     | **0.8321**     | 0.2134    | 
| **LLaMA 2**              | 0.0369         | 0.0516         | 0.4867    |
| **LLaMA 2 Chat**         | 0.5432         | 0.3807         | **0.5493**|
<br>

To further test our decoder, in addition to the testing data described above, we also reused some of the datasets that had been used to test the state-of-the-art Sabiá model for American Portuguese and that were originally developed with materials in Portuguese: ASSIN2 RTE (entailment), ASSIN2 STS (similarity), BLUEX (question answering), ENEM 2022 (question answering) and FaQuAD (extractive question answering).

The scores of Sabiá invite comparison with Gervásio's, but such a comparison needs to be taken with some caution.
- First, the Sabiá scores are repeated from the respective paper, which only provides results for a single run of each task, while the scores of Gervásio are the average of three runs with different seeds. 
- Second, the evaluation methods adopted for Sabiá are *sui generis* and different from the ones adopted for Gervásio. 
- Third, to evaluate Sabiá, the examples included in the few-shot prompt were hand-picked and identical for every test instance in each task, whereas to evaluate Gervásio the examples were randomly selected to be included in the prompts.
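
As a simple sketch of this random selection (the number of shots, the example pool and the prompt formatting below are illustrative assumptions, not the actual evaluation harness):

```python3
import random

def build_few_shot_prompt(pool: list[str], test_instance: str, k: int = 3, seed: int = 42) -> str:
    """Randomly pick k solved examples from the pool and prepend them to the test instance."""
    rng = random.Random(seed)
    shots = rng.sample(pool, k)
    return "\n\n".join(shots + [test_instance])
```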


| Model                    | ENEM 2022 (Accuracy) | BLUEX (Accuracy)| RTE (F1)  | STS (Pearson) |
|--------------------------|----------------------|-----------------|-----------|---------------|
| **Gervásio 7B PT-BR**    | 0.1977               | 0.2640          | **0.7469**| **0.2136**    |
| **LLaMA 2**              | 0.2458               | 0.2903          | 0.0913    | 0.1034        |
| **LLaMA 2 Chat**         | 0.2231               | 0.2959          | 0.5546    | 0.1750        |
||||||
| **Sabiá-7B**             | **0.6017**           | **0.7743**      | 0.6847    | 0.1363        |

<br>


# How to use

You can use this model directly with a pipeline for causal language modeling (CLM):

```python3
>>> from transformers import pipeline
>>> generator = pipeline(model='PORTULAN/gervasio-7b-portuguese-ptbr-decoder')
>>> generator("A música brasileira é", max_new_tokens=10)
[{'generated_text': 'A música brasileira é uma das mais ricas do mundo'}]
```
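
Alternatively, the tokenizer and model can be loaded explicitly; the half-precision dtype below is an assumption to ease running on consumer-grade hardware, not a requirement of the model:

```python3
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PORTULAN/gervasio-7b-portuguese-ptbr-decoder")
model = AutoModelForCausalLM.from_pretrained(
    "PORTULAN/gervasio-7b-portuguese-ptbr-decoder",
    torch_dtype=torch.bfloat16,  # assumed for reduced memory; use torch.float32 if preferred
)

inputs = tokenizer("A música brasileira é", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```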
<br>

# Acknowledgments

The research reported here was partially supported by: PORTULAN CLARIN—Research Infrastructure for the Science and Technology of Language,
funded by Lisboa 2020, Alentejo 2020 and FCT—Fundação para a Ciência e Tecnologia under the
grant PINFRA/22117/2016; research project GPT-PT - Transformer-based Decoder for the Portuguese Language, funded by FCT—Fundação para a Ciência e Tecnologia under the
grant CPCA-IAC/AV/478395/2022; innovation project 
ACCELERAT.AI - Multilingual Intelligent Contact Centers, funded by IAPMEI, I.P. - Agência para a Competitividade e Inovação 
under the grant C625734525-00462629, of Plano de Recuperação e Resiliência, 
call RE-C05-i01.01 – Agendas/Alianças Mobilizadoras para a Reindustrialização.