Dataset commit a062ec7 by jarodrigues: "Update README.md" (parent: 1a503ea)

README.md (changed)
---
annotations_creators:
- no-annotation
language:
- pt
license:
- other
multilinguality:
- monolingual
pretty_name: ParlamentoPT
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
tags:
- parlamentopt
- parlamento
- albertina-pt*
- albertina-ptpt
- albertina-ptbr
- fill-mask
- bert
- deberta
- portuguese
- encoder
- foundation model
---

# Dataset Card for ParlamentoPT

### Dataset Summary

The ParlamentoPT dataset was obtained by collecting publicly available documents containing transcriptions of debates in the Portuguese Parliament. The data was collected from the Portuguese Parliament portal in accordance with its [open data policy](https://www.parlamento.pt/Cidadania/Paginas/DadosAbertos.aspx).

This dataset was collected to create the [Albertina-PT*](https://huggingface.co/PORTULAN/albertina-ptpt) language model, and it serves as training data for model development. The development of the model is a collaborative effort between the University of Lisbon and the University of Porto in Portugal.

<br>
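The corpus targets the language-modeling and masked-language-modeling objectives listed in the metadata above. As an illustration only, here is a minimal sketch of the kind of token masking used in masked-language-modeling pre-training; the 15% masking rate and the `[MASK]` token follow common BERT-style conventions and are assumptions, not something this card specifies.

```python
import random


def mask_tokens(tokens, mask_token="[MASK]", mask_prob=0.15, seed=0):
    """Randomly replace a fraction of tokens with a mask token.

    Returns the corrupted sequence and a parallel list of labels
    holding the original token where a mask was applied, or None
    where the token was left intact.
    """
    rng = random.Random(seed)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            corrupted.append(mask_token)
            labels.append(tok)
        else:
            corrupted.append(tok)
            labels.append(None)
    return corrupted, labels


# Example with a short parliamentary-style sentence; which tokens get
# masked depends on the seed.
sentence = "Sessão plenária da Assembleia da República".split()
corrupted, labels = mask_tokens(sentence, seed=3)
print(corrupted)
```

A model trained with this objective (such as an encoder like Albertina) learns to predict the original token at each masked position.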

# Citation

When using or citing this dataset, kindly cite the following publication:

```latex
@misc{albertina-pt,
  title={Advancing Neural Encoding of Portuguese
         with Transformer Albertina PT-*},
  author={João Rodrigues and Luís Gomes and João Silva and
          António Branco and Rodrigo Santos and
          Henrique Lopes Cardoso and Tomás Osório},
  year={2023},
  eprint={?},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

<br>

# Acknowledgments

The research reported here was partially supported by: PORTULAN CLARIN—Research Infrastructure for the Science and Technology of Language, funded by Lisboa 2020, Alentejo 2020 and FCT—Fundação para a Ciência e Tecnologia under the grant PINFRA/22117/2016; research project ALBERTINA - Foundation Encoder Model for Portuguese and AI, funded by FCT—Fundação para a Ciência e Tecnologia under the grant CPCA-IAC/AV/478394/2022; innovation project ACCELERAT.AI - Multilingual Intelligent Contact Centers, funded by IAPMEI, I.P. - Agência para a Competitividade e Inovação under the grant C625734525-00462629, of Plano de Recuperação e Resiliência, call RE-C05-i01.01 – Agendas/Alianças Mobilizadoras para a Reindustrialização; and LIACC - Laboratory for AI and Computer Science, funded by FCT—Fundação para a Ciência e Tecnologia under the grant FCT/UID/CEC/0027/2020.