---
license: mit
language:
- pt
tags:
- gervasio-pt*
- gervasio-ptpt
- gervasio-ptbr
- gervasio-7b-portuguese-ptpt-decoder
- gervasio-7b-portuguese-ptbr-decoder
- portulan
- albertina-pt*
- clm
- gpt
- portuguese
- decoder
- foundation model
datasets:
- PORTULAN/extraglue
- PORTULAN/extraglue-instruct
---
</br>
</br>
<img align="left" width="40" height="40" src="https://github.githubassets.com/images/icons/emoji/unicode/1f917.png">
<p style="text-align: center;">&nbsp;&nbsp;&nbsp;&nbsp;This is the model card for Gervásio 7B PTBR Decoder. 
  You may be interested in some of the other models in the <a href="https://huggingface.co/PORTULAN">Albertina (encoders) and Gervásio (decoders) families</a>.
</p>
</br>
</br>

# Gervásio 7B PTBR

</br>

**Gervásio PT*** is a **fully open** decoder for the **Portuguese language**. 


It is a **decoder** of the LLaMA family, based on the Transformer neural architecture and developed over the LLaMA-2 7B model.
It was further improved through additional training over language resources that include new instruction data sets of Portuguese prepared for this purpose ([extraGLUE-Instruct](https://huggingface.co/datasets/PORTULAN/extraglue-instruct)).

It has different versions that were trained for different variants of Portuguese (PT), 
namely for the European variant, spoken in Portugal ([**gervasio-7b-portuguese-ptpt-decoder**](https://huggingface.co/PORTULAN/gervasio-7b-portuguese-ptpt-decoder)), and for the American variant, spoken in Brazil ([**gervasio-7b-portuguese-ptbr-decoder**](https://huggingface.co/PORTULAN/gervasio-7b-portuguese-ptbr-decoder)).

All versions of Gervásio are **openly distributed for free under an open license**, including for research and commercial purposes, and, given their size, can be run on consumer-grade hardware.

**Gervásio 7B PTBR** is developed by NLX-Natural Language and Speech Group, at the University of Lisbon, Faculty of Sciences, Department of Informatics, Portugal.

For the record, its full name is **Gervásio Produz Textos em Português**, which corresponds to the natural acronym **GPT PT**, 
and which is known more shortly as **Gervásio PT*** or, even more briefly, just as **Gervásio**, among its acquaintances.

For a fully detailed description, check the respective [publication](https://arxiv.org/abs/2402.18766):

```latex
@misc{gervasio,
      title={Advancing Generative AI for Portuguese with
             Open Decoder Gervásio PT-*},
      author={Rodrigo Santos and João Silva and Luís Gomes and
              João Rodrigues and António Branco},
      year={2024},
      eprint={2402.18766},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

Please use the above canonical reference when using or citing this model.


<br>


# Model Description

**This model card is for Gervásio 7B PTBR**, with 7 billion parameters, a hidden size of 4,096 units, an intermediate size of 11,008 units, 32 attention heads, 32 hidden layers, and a tokenizer obtained using the Byte-Pair Encoding (BPE) algorithm implemented with SentencePiece, featuring a vocabulary size of 32,000.
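
As a quick, hedged illustration, these architecture figures can be read directly from the published configuration using the standard `transformers` `AutoConfig` API, without downloading the model weights:

```python3
from transformers import AutoConfig

# Load only the configuration of the published checkpoint.
config = AutoConfig.from_pretrained("PORTULAN/gervasio-7b-portuguese-ptbr-decoder")

print(config.hidden_size)          # hidden size: 4096
print(config.intermediate_size)    # intermediate size: 11008
print(config.num_attention_heads)  # attention heads: 32
print(config.num_hidden_layers)    # hidden layers: 32
print(config.vocab_size)           # vocabulary size: 32000
```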

Gervásio 7B PTBR is distributed under an [MIT license](https://huggingface.co/PORTULAN/gervasio-7b-portuguese-ptpt-decoder/blob/main/LICENSE).


<br>

# Training Data

**Gervásio 7B PTBR** was trained through standard supervised fine-tuning and, to keep some alignment with mainstream benchmarks for English, we resorted to tasks and respective datasets from the GLUE and SuperGLUE collections.


We selected those datasets where the outcome of their machine translation into Portuguese could preserve, in the target language, the linguistic properties at stake.

From GLUE, we resorted to the following four tasks:
- MRPC (paraphrase detection).
- RTE (recognizing textual entailment).
- STS-B (semantic textual similarity).
- WNLI (coreference and natural language inference).

And from SuperGLUE, we included these other four tasks: 
- BoolQ (yes/no question answering).
- CB (inference with 3 labels).
- COPA (reasoning).
- MultiRC (question answering).


These datasets were machine translated into American Portuguese and are gathered in the [extraGLUE](https://huggingface.co/datasets/PORTULAN/extraglue) dataset.


Furthermore, instruction templates have been manually crafted for each task.
These take the various fields in the dataset and arrange them into prompts, which were collected into the [extraGLUE-instruct](https://huggingface.co/datasets/PORTULAN/extraglue-instruct) dataset.
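
For illustration only, a template of this kind could arrange the fields of an RTE-style example into a prompt roughly as sketched below; the field names and the Portuguese wording are hypothetical, not the released templates, which can be inspected in the extraGLUE-Instruct dataset itself.

```python3
# Hypothetical instruction template for an RTE-style example.
# The actual extraGLUE-Instruct prompts may differ in wording and structure.
RTE_TEMPLATE = (
    "Premissa: {premise}\n"
    "Hipótese: {hypothesis}\n"
    "A hipótese decorre da premissa? Responda Sim ou Não.\n"
    "Resposta:"
)

example = {
    "premise": "O concerto foi cancelado por causa da chuva.",
    "hypothesis": "O concerto não aconteceu.",
    "label": "Sim",
}

prompt = RTE_TEMPLATE.format(premise=example["premise"],
                             hypothesis=example["hypothesis"])
response = " " + example["label"]  # the answer the model is trained to produce
print(prompt + response)
```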

We also employed data augmentation techniques to enhance the size and diversity of our dataset.
This involved repurposing the tasks in various ways, such as generation of answers from MultiRC, question generation from BoolQ, and other relevant modifications.


# Training Details

We applied supervised fine-tuning with a causal language modeling training objective, following a zero-out technique during the fine-tuning process.
Specifically, while the entire prompt received attention during fine-tuning, only the response tokens were subjected to back-propagation.
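
A minimal sketch of this zero-out masking, assuming the common Hugging Face convention that label positions set to -100 are ignored by the causal LM loss (the prompt/response split and tokenizer handling below are illustrative, not the exact training code):

```python3
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PORTULAN/gervasio-7b-portuguese-ptbr-decoder")

def build_features(prompt: str, response: str, max_length: int = 512):
    # Tokenise prompt and response separately to know where the response starts.
    prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    response_ids = tokenizer(response, add_special_tokens=False)["input_ids"]

    input_ids = prompt_ids + response_ids
    # Zero-out technique: the whole prompt receives attention,
    # but only the response tokens contribute to the loss (-100 is ignored).
    labels = [-100] * len(prompt_ids) + response_ids

    input_ids = input_ids[:max_length]
    labels = labels[:max_length]
    attention_mask = [1] * len(input_ids)

    return {
        "input_ids": torch.tensor(input_ids),
        "attention_mask": torch.tensor(attention_mask),
        "labels": torch.tensor(labels),
    }
```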

In terms of hyper-parameters, the model was trained with a learning rate of 2 * 10^-5, a weight decay of 0.1, a two-epoch training regime without warm-up, and to ensure the same number of tokens back-propagated per step, we employed an input sequence of 512 tokens with a batch size of 16 and 16 accumulation steps.

Due to hardware limitations that imposed a shorter sequence length (512) compared to the base model (4096), instead of the typical practice of concatenating all training examples and then dividing them into batches with the same input sequence length, we separated each example individually.
In other words, each example occupies the full input sequence length.
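
A hedged sketch of how these settings could be expressed with the standard `transformers` `TrainingArguments`, together with the per-example padding described above (the output directory and the padding helper are illustrative assumptions, not the exact training code):

```python3
from transformers import TrainingArguments

# Hyper-parameters as reported above: 16 examples per step with 16 gradient
# accumulation steps, each example filling the full 512-token input sequence.
training_args = TrainingArguments(
    output_dir="gervasio-7b-ptbr-sft",   # illustrative path
    learning_rate=2e-5,
    weight_decay=0.1,
    num_train_epochs=2,
    warmup_steps=0,                      # no warm-up
    per_device_train_batch_size=16,
    gradient_accumulation_steps=16,
)

MAX_LENGTH = 512

def pad_example(features, pad_token_id, max_length=MAX_LENGTH):
    # No packing: each example is padded individually to the full input length.
    n_pad = max_length - len(features["input_ids"])
    features["input_ids"] += [pad_token_id] * n_pad
    features["attention_mask"] += [0] * n_pad
    features["labels"] += [-100] * n_pad  # padding is ignored by the loss
    return features
```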


# Performance

For testing, we reserved the translated datasets MRPC (similarity) and RTE (inference), from GLUE, and COPA (reasoning/qa), from SuperGLUE, which were taken as representatives of three major types of tasks, and were not seen during training.

| Model                    | MRPC (F1)      | RTE (F1)       | COPA (F1) |
|--------------------------|----------------|----------------|-----------|
| **Gervásio 7B PTBR**     | **0.7822**     | **0.8321**     | 0.2134    | 
| **LLaMA-2 (English)**         | 0.0369         | 0.0516         | 0.4867    |
| **LLaMA-2 Chat (English)**    | 0.5432         | 0.3807         | **0.5493**|


For further testing of our decoder, in addition to the testing data described above, we also used datasets that were originally developed with texts in Portuguese: ASSIN2 RTE (entailment), ASSIN2 STS (similarity), BLUEX (question answering), ENEM 2022 (question answering) and FaQuAD (extractive question answering).

| Model                    | ENEM 2022 (Accuracy) | BLUEX (Accuracy)| RTE (F1)  | STS (Pearson) |
|--------------------------|----------------------|-----------------|-----------|---------------|
| **Gervásio 7B PTBR**    | 0.1977               | 0.2640          | **0.7469**| **0.2136**    |
| **LLaMA-2 (English)**              | **0.2458**               | 0.2903          | 0.0913    | 0.1034        |
| **LLaMA-2 Chat (English)**         | 0.2231               | **0.2959**          | 0.5546    | 0.1750        |



In comparison with other decoders of the same dimension, namely Sabiá 1.5B, Gervásio shows superior
or competitive performance for the tasks in PTBR, while being the sole decoder of 1.5B dimension for the PTPT 
variant of Portuguese, and thus the state of the art
in this respect at the time of its publishing. For further evaluation data, 
see the respective [publication](https://arxiv.org/abs/2402.18766).

<br>

# How to use

You can use this model directly with a pipeline for causal language modeling:

```python3
>>> from transformers import pipeline
>>> generator = pipeline(model='PORTULAN/gervasio-7b-portuguese-ptbr-decoder')
>>> generator("A música brasileira é", max_new_tokens=10)
```
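
Beyond the pipeline above, a minimal sketch of explicit loading and generation (the dtype, device placement and decoding settings below are illustrative choices, not prescribed by this model card):

```python3
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PORTULAN/gervasio-7b-portuguese-ptbr-decoder"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bfloat16 support
    device_map="auto",           # requires the accelerate package
)

inputs = tokenizer("A música brasileira é", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```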
<br>

# Acknowledgments

The research reported here was partially supported by: PORTULAN CLARIN—Research Infrastructure for the Science and Technology of Language,
funded by Lisboa 2020, Alentejo 2020 and FCT—Fundação para a Ciência e Tecnologia under the
grant PINFRA/22117/2016; research project GPT-PT - Transformer-based Decoder for the Portuguese Language, funded by FCT—Fundação para a Ciência e Tecnologia under the
grant CPCA-IAC/AV/478395/2022; innovation project 
ACCELERAT.AI - Multilingual Intelligent Contact Centers, funded by IAPMEI, I.P. - Agência para a Competitividade e Inovação 
under the grant C625734525-00462629, of Plano de Recuperação e Resiliência, 
call RE-C05-i01.01 – Agendas/Alianças Mobilizadoras para a Reindustrialização.