Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


gervasio-7b-portuguese-ptpt-decoder - GGUF
- Model creator: https://huggingface.co/PORTULAN/
- Original model: https://huggingface.co/PORTULAN/gervasio-7b-portuguese-ptpt-decoder/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gervasio-7b-portuguese-ptpt-decoder.Q2_K.gguf](https://huggingface.co/RichardErkhov/PORTULAN_-_gervasio-7b-portuguese-ptpt-decoder-gguf/blob/main/gervasio-7b-portuguese-ptpt-decoder.Q2_K.gguf) | Q2_K | 2.36GB |
| [gervasio-7b-portuguese-ptpt-decoder.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/PORTULAN_-_gervasio-7b-portuguese-ptpt-decoder-gguf/blob/main/gervasio-7b-portuguese-ptpt-decoder.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [gervasio-7b-portuguese-ptpt-decoder.IQ3_S.gguf](https://huggingface.co/RichardErkhov/PORTULAN_-_gervasio-7b-portuguese-ptpt-decoder-gguf/blob/main/gervasio-7b-portuguese-ptpt-decoder.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [gervasio-7b-portuguese-ptpt-decoder.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/PORTULAN_-_gervasio-7b-portuguese-ptpt-decoder-gguf/blob/main/gervasio-7b-portuguese-ptpt-decoder.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [gervasio-7b-portuguese-ptpt-decoder.IQ3_M.gguf](https://huggingface.co/RichardErkhov/PORTULAN_-_gervasio-7b-portuguese-ptpt-decoder-gguf/blob/main/gervasio-7b-portuguese-ptpt-decoder.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [gervasio-7b-portuguese-ptpt-decoder.Q3_K.gguf](https://huggingface.co/RichardErkhov/PORTULAN_-_gervasio-7b-portuguese-ptpt-decoder-gguf/blob/main/gervasio-7b-portuguese-ptpt-decoder.Q3_K.gguf) | Q3_K | 3.07GB |
| [gervasio-7b-portuguese-ptpt-decoder.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/PORTULAN_-_gervasio-7b-portuguese-ptpt-decoder-gguf/blob/main/gervasio-7b-portuguese-ptpt-decoder.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [gervasio-7b-portuguese-ptpt-decoder.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/PORTULAN_-_gervasio-7b-portuguese-ptpt-decoder-gguf/blob/main/gervasio-7b-portuguese-ptpt-decoder.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [gervasio-7b-portuguese-ptpt-decoder.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/PORTULAN_-_gervasio-7b-portuguese-ptpt-decoder-gguf/blob/main/gervasio-7b-portuguese-ptpt-decoder.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [gervasio-7b-portuguese-ptpt-decoder.Q4_0.gguf](https://huggingface.co/RichardErkhov/PORTULAN_-_gervasio-7b-portuguese-ptpt-decoder-gguf/blob/main/gervasio-7b-portuguese-ptpt-decoder.Q4_0.gguf) | Q4_0 | 3.56GB |
| [gervasio-7b-portuguese-ptpt-decoder.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/PORTULAN_-_gervasio-7b-portuguese-ptpt-decoder-gguf/blob/main/gervasio-7b-portuguese-ptpt-decoder.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [gervasio-7b-portuguese-ptpt-decoder.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/PORTULAN_-_gervasio-7b-portuguese-ptpt-decoder-gguf/blob/main/gervasio-7b-portuguese-ptpt-decoder.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [gervasio-7b-portuguese-ptpt-decoder.Q4_K.gguf](https://huggingface.co/RichardErkhov/PORTULAN_-_gervasio-7b-portuguese-ptpt-decoder-gguf/blob/main/gervasio-7b-portuguese-ptpt-decoder.Q4_K.gguf) | Q4_K | 3.8GB |
| [gervasio-7b-portuguese-ptpt-decoder.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/PORTULAN_-_gervasio-7b-portuguese-ptpt-decoder-gguf/blob/main/gervasio-7b-portuguese-ptpt-decoder.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [gervasio-7b-portuguese-ptpt-decoder.Q4_1.gguf](https://huggingface.co/RichardErkhov/PORTULAN_-_gervasio-7b-portuguese-ptpt-decoder-gguf/blob/main/gervasio-7b-portuguese-ptpt-decoder.Q4_1.gguf) | Q4_1 | 3.95GB |
| [gervasio-7b-portuguese-ptpt-decoder.Q5_0.gguf](https://huggingface.co/RichardErkhov/PORTULAN_-_gervasio-7b-portuguese-ptpt-decoder-gguf/blob/main/gervasio-7b-portuguese-ptpt-decoder.Q5_0.gguf) | Q5_0 | 4.33GB |
| [gervasio-7b-portuguese-ptpt-decoder.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/PORTULAN_-_gervasio-7b-portuguese-ptpt-decoder-gguf/blob/main/gervasio-7b-portuguese-ptpt-decoder.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [gervasio-7b-portuguese-ptpt-decoder.Q5_K.gguf](https://huggingface.co/RichardErkhov/PORTULAN_-_gervasio-7b-portuguese-ptpt-decoder-gguf/blob/main/gervasio-7b-portuguese-ptpt-decoder.Q5_K.gguf) | Q5_K | 4.45GB |
| [gervasio-7b-portuguese-ptpt-decoder.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/PORTULAN_-_gervasio-7b-portuguese-ptpt-decoder-gguf/blob/main/gervasio-7b-portuguese-ptpt-decoder.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [gervasio-7b-portuguese-ptpt-decoder.Q5_1.gguf](https://huggingface.co/RichardErkhov/PORTULAN_-_gervasio-7b-portuguese-ptpt-decoder-gguf/blob/main/gervasio-7b-portuguese-ptpt-decoder.Q5_1.gguf) | Q5_1 | 4.72GB |
| [gervasio-7b-portuguese-ptpt-decoder.Q6_K.gguf](https://huggingface.co/RichardErkhov/PORTULAN_-_gervasio-7b-portuguese-ptpt-decoder-gguf/blob/main/gervasio-7b-portuguese-ptpt-decoder.Q6_K.gguf) | Q6_K | 5.15GB |
| [gervasio-7b-portuguese-ptpt-decoder.Q8_0.gguf](https://huggingface.co/RichardErkhov/PORTULAN_-_gervasio-7b-portuguese-ptpt-decoder-gguf/blob/main/gervasio-7b-portuguese-ptpt-decoder.Q8_0.gguf) | Q8_0 | 6.67GB |
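
Any of these files can also be fetched programmatically with `huggingface_hub`. A minimal sketch (the Q4_K_M file is picked here only as an example):

```python
from huggingface_hub import hf_hub_download

# Download one quantization from this repository (Q4_K_M chosen as an example).
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/PORTULAN_-_gervasio-7b-portuguese-ptpt-decoder-gguf",
    filename="gervasio-7b-portuguese-ptpt-decoder.Q4_K_M.gguf",
)
print(gguf_path)  # local path of the downloaded .gguf file
```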



Original model description:
---
license: mit
language:
- pt
tags:
- gervasio-pt*
- gervasio-ptpt
- gervasio-ptbr
- gervasio-7b-portuguese-ptpt-decoder
- gervasio-7b-portuguese-ptbr-decoder
- portulan
- albertina-pt*
- clm
- gpt
- portuguese
- decoder
- foundation model
datasets:
- PORTULAN/extraglue
- PORTULAN/extraglue-instruct
---
</br>
</br>
<img align="left" width="40" height="40" src="https://github.githubassets.com/images/icons/emoji/unicode/1f917.png">
<p style="text-align: center;">&nbsp;&nbsp;&nbsp;&nbsp;This is the model card for Gervásio 7B PTPT Decoder.
You may be interested in some of the other models in the <a href="https://huggingface.co/PORTULAN">Albertina (encoders) and Gervásio (decoders) families</a>.
</p>
</br>
</br>

# Gervásio 7B PTPT

</br>

**Gervásio PT*** is a **fully open** decoder for the **Portuguese language**.


It is a **decoder** of the LLaMA family, based on the Transformer neural architecture and developed over the LLaMA-2 7B model.
It was further improved through additional training on language resources that include new instruction datasets of Portuguese prepared for this purpose ([extraGLUE-Instruct](https://huggingface.co/datasets/PORTULAN/extraglue-instruct)).

It comes in different versions trained for different variants of Portuguese (PT),
namely for the European variant, spoken in Portugal ([**gervasio-7b-portuguese-ptpt-decoder**](https://huggingface.co/PORTULAN/gervasio-7b-portuguese-ptpt-decoder)), and for the American variant, spoken in Brazil ([**gervasio-7b-portuguese-ptbr-decoder**](https://huggingface.co/PORTULAN/gervasio-7b-portuguese-ptbr-decoder)).

All versions of Gervásio are **openly distributed for free under an open license**, including for research and commercial purposes, and, given their size, they can
be run on consumer-grade hardware.

**Gervásio 7B PTPT** is developed by the NLX-Natural Language and Speech Group at the University of Lisbon, Faculty of Sciences, Department of Informatics, Portugal.

For the record, its full name is **Gervásio Produz Textos em Português**, which yields the natural acronym **GPT PT**;
it is known more shortly as **Gervásio PT*** or, even more briefly, just as **Gervásio**, among its acquaintances.

Gervásio 7B PTPT is developed by a team from the University of Lisbon, Portugal.
For a fully detailed description, check the respective [publication](https://arxiv.org/abs/2402.18766):

```latex
@misc{gervasio,
  title={Advancing Generative AI for Portuguese with
         Open Decoder Gervásio PT-*},
  author={Rodrigo Santos and João Silva and Luís Gomes and
          João Rodrigues and António Branco},
  year={2024},
  eprint={2402.18766},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

Please use the above canonical reference when using or citing this model.


<br>


# Model Description

**This model card is for Gervásio 7B PTPT**, with 7 billion parameters, a hidden size of 4,096 units, an intermediate size of 11,008 units, 32 attention heads, 32 hidden layers, and a tokenizer obtained using the Byte-Pair Encoding (BPE) algorithm implemented with SentencePiece, featuring a vocabulary size of 32,000.
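
For orientation, these figures map onto a `transformers` LLaMA configuration roughly as follows. This is a sketch for illustration only; the `config.json` shipped with the original model is the authoritative source for these values.

```python
from transformers import LlamaConfig

# Architecture as reported in the model card above.
config = LlamaConfig(
    hidden_size=4096,          # hidden size
    intermediate_size=11008,   # feed-forward intermediate size
    num_attention_heads=32,    # attention heads
    num_hidden_layers=32,      # hidden layers
    vocab_size=32000,          # SentencePiece BPE vocabulary
)
print(config)
```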

Gervásio 7B PTPT is distributed under an [MIT license](https://huggingface.co/PORTULAN/gervasio-7b-portuguese-ptbr-decoder/blob/main/LICENSE).


<br>

# Training Data

**Gervásio 7B PTPT** was trained with standard supervised fine-tuning and, to keep some alignment with mainstream benchmarks for English, we resorted to tasks and respective datasets in the GLUE and the SuperGLUE collections.


We selected those datasets where the outcome of their machine translation into European Portuguese could preserve, in the target language, the linguistic properties at stake.

From GLUE, we resorted to the following four tasks:
- MRPC (paraphrase detection).
- RTE (recognizing textual entailment).
- STS-B (semantic textual similarity).
- WNLI (coreference and natural language inference).

And from SuperGLUE, we included these other four tasks:
- BoolQ (yes/no question answering).
- CB (inference with 3 labels).
- COPA (reasoning).
- MultiRC (question answering).



These datasets were machine translated into European Portuguese and are gathered in the [extraGLUE](https://huggingface.co/datasets/PORTULAN/extraglue) dataset.


Furthermore, instruction templates have been manually crafted for each task.
These take the various fields in the dataset and arrange them into prompts, which were collected into the [extraGLUE-instruct](https://huggingface.co/datasets/PORTULAN/extraglue-instruct) dataset.
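
Purely as an illustration of the idea, a template of this kind might arrange the fields of an MRPC example into a prompt along these lines. This is a hypothetical sketch, not one of the actual templates published in extraGLUE-Instruct:

```python
# Hypothetical illustration of arranging MRPC fields into an instruction prompt.
# The real templates are those defined in PORTULAN/extraglue-instruct.
def mrpc_prompt(example: dict) -> str:
    # "Indicate whether the following two sentences are paraphrases of each other."
    return (
        "Indica se as duas frases seguintes são paráfrases uma da outra.\n"
        f"Frase 1: {example['sentence1']}\n"
        f"Frase 2: {example['sentence2']}\n"
        "Resposta:"
    )

print(mrpc_prompt({"sentence1": "O gato dorme.", "sentence2": "O felino está a dormir."}))
```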

We also employed data augmentation techniques to enhance the size and diversity of our dataset.
This involved repurposing the tasks in various ways, such as generating answers from MultiRC, generating questions from BoolQ, and other relevant modifications.


# Training Details

We applied supervised fine-tuning with a causal language modeling training objective following a zero-out technique during the fine-tuning process.
Specifically, while the entire prompt received attention during fine-tuning, only the response tokens were subjected to back-propagation.
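
A minimal sketch of this zero-out approach, assuming a Hugging Face `transformers`-style setup where tokens labelled `-100` are ignored by the cross-entropy loss. It illustrates the general technique, not the project's actual training code:

```python
import torch

IGNORE_INDEX = -100  # labels with this value are skipped by the loss

def build_labels(prompt_ids: list[int], response_ids: list[int]) -> dict:
    """Zero-out the prompt: the model attends to it, but only the response
    tokens contribute to back-propagation."""
    input_ids = torch.tensor(prompt_ids + response_ids)
    labels = input_ids.clone()
    labels[: len(prompt_ids)] = IGNORE_INDEX  # mask the prompt portion
    return {"input_ids": input_ids, "labels": labels}
```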

In terms of hyper-parameters, the model was trained with a learning rate of 2 * 10^-5, a weight decay of 0.1, and a two-epoch training regime without warm-up; to ensure the same number of tokens was back-propagated per step, we employed an input sequence of 512 tokens with a batch size of 16 and 16 accumulation steps.
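
Expressed as `transformers` training arguments, these settings would look roughly as follows. Only the hyper-parameters stated above are reproduced; the output path, model, data collator, and tokenizer truncation to 512 tokens are assumptions left out of this sketch:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gervasio-7b-ptpt-sft",   # hypothetical output path
    learning_rate=2e-5,
    weight_decay=0.1,
    num_train_epochs=2,
    warmup_steps=0,                      # no warm-up
    per_device_train_batch_size=16,
    gradient_accumulation_steps=16,      # 512 tokens x 16 x 16 per optimizer step
)
```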

Due to hardware limitations that imposed a shorter sequence length (512) compared to the base model (4096), instead of the typical practice of concatenating all training examples and then dividing them into batches with the same input sequence length, we separated each example individually.
In other words, each example occupies the full input sequence length.



# Performance

For testing, we reserved the translated datasets MRPC (similarity) and RTE (inference), from GLUE, and COPA (reasoning/QA), from SuperGLUE, which were taken as representatives of three major types of tasks and were not seen during training.


| Model | MRPC (F1) | RTE (F1) | COPA (F1) |
|----------------------------|------------|------------|------------|
| **Gervásio 7B PTPT**       | **0.7273** | **0.8291** | **0.5459** |
| **LLaMA-2 (English)**      | 0.0328     | 0.0482     | 0.3844     |
| **LLaMA-2 Chat (English)** | 0.5703     | 0.4697     | 0.4737     |

<br>

# How to use

You can use this model directly with a pipeline for causal language modeling:

```python3
>>> from transformers import pipeline
>>> generator = pipeline(model='PORTULAN/gervasio-7b-portuguese-ptpt-decoder')
>>> generator("A comida portuguesa é", max_new_tokens=10)
```
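
To run one of the GGUF quantizations from this repository instead, a minimal sketch with `llama-cpp-python` (an assumption of this README, not part of the original model card) could look like this, assuming the package is installed and the Q4_K_M file has been downloaded as shown earlier:

```python
from llama_cpp import Llama

# Path to a quantized file from the table above (downloaded beforehand).
llm = Llama(
    model_path="gervasio-7b-portuguese-ptpt-decoder.Q4_K_M.gguf",
    n_ctx=512,  # matches the fine-tuning sequence length reported above
)

out = llm("A comida portuguesa é", max_tokens=10)
print(out["choices"][0]["text"])
```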
<br>

# Acknowledgments

The research reported here was partially supported by: PORTULAN CLARIN—Research Infrastructure for the Science and Technology of Language,
funded by Lisboa 2020, Alentejo 2020 and FCT—Fundação para a Ciência e Tecnologia under the
grant PINFRA/22117/2016; research project GPT-PT - Transformer-based Decoder for the Portuguese Language, funded by FCT—Fundação para a Ciência e Tecnologia under the
grant CPCA-IAC/AV/478395/2022; innovation project
ACCELERAT.AI - Multilingual Intelligent Contact Centers, funded by IAPMEI, I.P. - Agência para a Competitividade e Inovação
under the grant C625734525-00462629, of Plano de Recuperação e Resiliência,
call RE-C05-i01.01 – Agendas/Alianças Mobilizadoras para a Reindustrialização.