jarodrigues committed
Commit e7db4dd
1 Parent(s): 8bb6f30

Update README.md

Files changed (1)
  1. README.md +8 -29
README.md CHANGED
@@ -35,10 +35,6 @@ It has different versions that were trained for different variants of Portuguese
 namely the European variant from Portugal (**PT-PT**) and the American variant from Brazil (**PT-BR**),
 and it is distributed free of charge and under a most permissive license.

- **Albertina PT-PT** is the version for European **Portuguese** from **Portugal**,
- and to the best of our knowledge, at the time of its initial distribution,
- it is the first competitive encoder specifically for this language and variant
- that is made publicly available and distributed for reuse.

 It is developed by a joint team from the University of Lisbon and the University of Porto, Portugal.
 For further details, check the respective [publication](https://arxiv.org/abs/2305.06721):
@@ -75,20 +71,15 @@ DeBERTa is distributed under an [MIT license](https://github.com/microsoft/DeBER

 # Training Data

- [**Albertina PT-PT Base**](https://huggingface.co/PORTULAN/albertina-ptpt-base) was trained over a 2.2 billion token data set that resulted from gathering some openly available corpora of European Portuguese from the following sources:

- - [OSCAR](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301): the OSCAR data set includes documents in more than one hundred languages, including Portuguese, and it is widely used in the literature. It is the result of a selection performed over the [Common Crawl](https://commoncrawl.org/) data set, crawled from the Web, that retains only pages whose metadata indicates permission to be crawled, that performs deduplication, and that removes some boilerplate, among other filters. Given that it does not discriminate between the Portuguese variants, we performed extra filtering by retaining only documents whose meta-data indicate the Internet country code top-level domain of Portugal. We used the January 2023 version of OSCAR, which is based on the November/December 2022 version of Common Crawl.
- - [DCEP](https://joint-research-centre.ec.europa.eu/language-technology-resources/dcep-digital-corpus-european-parliament_en): the Digital Corpus of the European Parliament is a multilingual corpus including documents in all official EU languages published on the European Parliament's official website. We retained its European Portuguese portion.
- - [Europarl](https://www.statmt.org/europarl/): the European Parliament Proceedings Parallel Corpus is extracted from the proceedings of the European Parliament from 1996 to 2011. We retained its European Portuguese portion.
- - [ParlamentoPT](https://huggingface.co/datasets/PORTULAN/parlamento-pt): the ParlamentoPT is a data set we obtained by gathering the publicly available documents with the transcription of the debates in the Portuguese Parliament.
-
-
- [**Albertina PT-BR Base**](https://huggingface.co/PORTULAN/albertina-ptbr-base), in turn, was trained over a 3.7 billion token curated selection of documents from the [OSCAR](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301) data set, specifically filtered by the Internet country code top-level domain of Brazil.


 ## Preprocessing

- We filtered the PT-PT and PT-BR corpora using the [BLOOM pre-processing](https://github.com/bigscience-workshop/data-preparation) pipeline.
 We skipped the default filtering of stopwords since it would disrupt the syntactic structure, and also the filtering for language identification given the corpus was pre-selected as Portuguese.

@@ -96,13 +87,10 @@ We skipped the default filtering of stopwords since it would disrupt the syntact

 As codebase, we resorted to the [DeBERTa V1 Base](https://huggingface.co/microsoft/deberta-base), for English.

- To train [**Albertina PT-PT Base**](https://huggingface.co/PORTULAN/albertina-ptpt-base), the data set was tokenized with the original DeBERTa tokenizer with a 128 token sequence truncation and dynamic padding.
 The model was trained using the maximum available memory capacity resulting in a batch size of 3072 samples (192 samples per GPU).
 We opted for a learning rate of 1e-5 with linear decay and 10k warm-up steps.
- A total of 200 training epochs were performed resulting in approximately 180k steps.
- The model was trained for one day on a2-megagpu-16gb Google Cloud A2 VMs with 16 GPUs, 96 vCPUs and 1.360 GB of RAM.
-
- To train [**Albertina PT-BR Base**](https://huggingface.co/PORTULAN/albertina-ptpt-base) we followed the same hyperparameterization as the Albertina PT-PT Base model.
 The model was trained with a total of 150 training epochs resulting in approximately 180k steps.
 The model was trained for one day on a2-megagpu-16gb Google Cloud A2 VMs with 16 GPUs, 96 vCPUs and 1,360 GB of RAM.

@@ -111,7 +99,7 @@ The model was trained for one day on a2-megagpu-16gb Google Cloud A2 VMs with 16

 # Evaluation

- The two base model versions were evaluated on downstream tasks, namely the translations into PT-BR and PT-PT of the English data sets used for a few of the tasks in the widely-used [GLUE benchmark](https://huggingface.co/datasets/glue), which allowed us to test both Albertina-PT-* Base variants on a wider variety of downstream tasks.


 ## GLUE tasks translated
@@ -124,18 +112,9 @@ We address four tasks from those in PLUE, namely:

 | Model | RTE (Accuracy) | WNLI (Accuracy)| MRPC (F1) | STS-B (Pearson) |
 |--------------------------|----------------|----------------|-----------|-----------------|
- | **Albertina-PT-BR Base** | 0.6462 | **0.5493** | 0.8779 | 0.8501 |
- | **Albertina-PT-PT Base** | **0.6643** | 0.4366 | **0.8966** | **0.8608** |


- We resorted to [GLUE-PT](https://huggingface.co/datasets/PORTULAN/glue-ptpt), a **PT-PT version of the GLUE** benchmark.
- We automatically translated the same four tasks from GLUE using [DeepL Translate](https://www.deepl.com/), which specifically provides translation from English to PT-PT as an option.
-
- | Model | RTE (Accuracy) | WNLI (Accuracy)| MRPC (F1) | STS-B (Pearson) |
- |--------------------------|----------------|----------------|-----------|-----------------|
- | **Albertina-PT-PT Base** | **0.6787** | 0.4507 | 0.8829 | **0.8581** |
- | **Albertina-PT-BR Base** | 0.6570 | **0.5070** | **0.8900** | 0.8516 |
-
 <br>

 # How to use
 
 namely the European variant from Portugal (**PT-PT**) and the American variant from Brazil (**PT-BR**),
 and it is distributed free of charge and under a most permissive license.


 It is developed by a joint team from the University of Lisbon and the University of Porto, Portugal.
 For further details, check the respective [publication](https://arxiv.org/abs/2305.06721):
 
 # Training Data

+ [**Albertina PT-BR Base**](https://huggingface.co/PORTULAN/albertina-ptbr-base) was trained over a 3.7 billion token curated selection of documents from the [OSCAR](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301) data set, specifically filtered by the Internet country code top-level domain of Brazil.
+ The OSCAR data set includes documents in more than one hundred languages, including Portuguese, and it is widely used in the literature. It is the result of a selection performed over the [Common Crawl](https://commoncrawl.org/) data set, crawled from the Web, that retains only pages whose metadata indicates permission to be crawled, that performs deduplication, and that removes some boilerplate, among other filters.
+ Given that it does not discriminate between the Portuguese variants, we performed extra filtering by retaining only documents whose metadata indicates the Internet country code top-level domain of Brazil. We used the January 2023 version of OSCAR, which is based on the November/December 2022 version of Common Crawl.
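
Purely for orientation, the sketch below shows how this kind of country-code filtering can be expressed over the OSCAR 2301 Portuguese split with the `datasets` library; the metadata field names are an assumption taken from the OSCAR-2301 dataset card, and this is not the pipeline actually used to build the training set.

```python
# Illustrative sketch only: keep OSCAR documents served from a .br domain.
# The metadata layout ("meta" -> "warc_headers" -> "warc-target-uri") is an
# assumption based on the OSCAR-2301 dataset card; the Albertina pipeline
# itself is not reproduced here.
from urllib.parse import urlparse

from datasets import load_dataset

# OSCAR-2301 is gated on the Hugging Face Hub, so authentication may be needed.
oscar_pt = load_dataset(
    "oscar-corpus/OSCAR-2301", "pt", split="train", streaming=True
)

def from_brazilian_domain(example):
    """Keep only documents whose source URL has the .br country-code TLD."""
    uri = example["meta"]["warc_headers"]["warc-target-uri"]
    host = urlparse(uri).hostname or ""
    return host.endswith(".br")

oscar_pt_br = oscar_pt.filter(from_brazilian_domain)
```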
 
 
 
 
 ## Preprocessing

+ We filtered the PT-BR corpus using the [BLOOM pre-processing](https://github.com/bigscience-workshop/data-preparation) pipeline.
 We skipped the default filtering of stopwords since it would disrupt the syntactic structure, and also the filtering for language identification given the corpus was pre-selected as Portuguese.
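
As a schematic illustration of such a filter stack (the filter names below are hypothetical and do not reproduce the data-preparation repository's API), keeping only generic quality filters while leaving out stopword and language-identification filtering might look as follows:

```python
# Schematic sketch only: hypothetical document-level filters; this does not
# reproduce the bigscience-workshop/data-preparation API.
from typing import Callable, Iterable

Filter = Callable[[str], bool]  # a filter returns True when the document is kept

def long_enough(doc: str) -> bool:
    # Drop very short documents.
    return len(doc.split()) >= 10

def low_symbol_ratio(doc: str) -> bool:
    # Drop documents dominated by non-alphanumeric characters.
    symbols = sum(not (c.isalnum() or c.isspace()) for c in doc)
    return symbols / max(len(doc), 1) < 0.3

# No stopword-ratio filter and no language-identification filter in the stack,
# mirroring the choices described above.
FILTERS: list[Filter] = [long_enough, low_symbol_ratio]

def keep(doc: str, filters: Iterable[Filter] = FILTERS) -> bool:
    return all(f(doc) for f in filters)
```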


 As codebase, we resorted to the [DeBERTa V1 Base](https://huggingface.co/microsoft/deberta-base), for English.

+
+ To train [**Albertina PT-BR Base**](https://huggingface.co/PORTULAN/albertina-ptbr-base), the data set was tokenized with the original DeBERTa tokenizer with 128-token sequence truncation and dynamic padding.
 The model was trained using the maximum available memory capacity resulting in a batch size of 3072 samples (192 samples per GPU).
 We opted for a learning rate of 1e-5 with linear decay and 10k warm-up steps.

 The model was trained with a total of 150 training epochs resulting in approximately 180k steps.
 The model was trained for one day on a2-megagpu-16gb Google Cloud A2 VMs with 16 GPUs, 96 vCPUs and 1,360 GB of RAM.
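
For orientation, here is a minimal sketch of the tokenization and hyperparameters quoted above using the Hugging Face `transformers` Trainer; it is not the training stack actually used, and the toy corpus, output directory and masking probability are assumptions made for illustration.

```python
# Minimal sketch, not the authors' training code: DeBERTa tokenizer with
# 128-token truncation, dynamic padding via the MLM collator, and the
# hyperparameters quoted above (192 samples per GPU, lr 1e-5 with linear
# decay, 10k warm-up steps, 150 epochs).
from datasets import Dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
model = AutoModelForMaskedLM.from_pretrained("microsoft/deberta-base")

# Toy stand-in for the 3.7 billion token PT-BR corpus.
corpus = Dataset.from_dict({"text": ["A Albertina é um modelo de língua.",
                                     "Texto de exemplo em português do Brasil."]})

def tokenize(batch):
    # Truncate to 128 tokens; padding is deferred to the collator (dynamic padding).
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized_corpus = corpus.map(tokenize, batched=True, remove_columns=["text"])

# Masked-language-modelling collator; pads each batch to its longest sequence.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="albertina-ptbr-base-sketch",   # hypothetical output path
    per_device_train_batch_size=192,           # 16 GPUs x 192 = 3072 samples per step
    learning_rate=1e-5,
    lr_scheduler_type="linear",
    warmup_steps=10_000,
    num_train_epochs=150,
)

Trainer(model=model, args=args,
        train_dataset=tokenized_corpus, data_collator=collator).train()
```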


 # Evaluation

+ The base model version was evaluated on downstream tasks, namely on translations into PT-BR of the English data sets used for a few of the tasks in the widely-used [GLUE benchmark](https://huggingface.co/datasets/glue).


 ## GLUE tasks translated
 
 | Model | RTE (Accuracy) | WNLI (Accuracy)| MRPC (F1) | STS-B (Pearson) |
 |--------------------------|----------------|----------------|-----------|-----------------|
+ | **Albertina-PT-BR Base** | 0.6462 | 0.5493 | 0.8779 | 0.8501 |
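
As a pointer for recomputing scores of this kind, the GLUE metrics (accuracy, F1, Pearson) are available through the `evaluate` library; the snippet below uses toy predictions and is not the harness that produced the table above.

```python
# Sketch with toy predictions: GLUE-style metrics via the `evaluate` library.
import evaluate

mrpc_metric = evaluate.load("glue", "mrpc")   # reports accuracy and F1
stsb_metric = evaluate.load("glue", "stsb")   # reports Pearson and Spearman correlation

print(mrpc_metric.compute(predictions=[1, 0, 1], references=[1, 0, 0]))
print(stsb_metric.compute(predictions=[0.8, 0.3, 0.9], references=[1.0, 0.2, 0.7]))
```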
 


 <br>

  # How to use