rvlopes committed
Commit a95ffb6
Parent: a266c64

Uploaded model, tokenizer and readme.

README.md CHANGED
@@ -1,3 +1,116 @@
  ---
- license: mit
+ language:
+ - pt
+ tags:
+ - GlórIA
+ - European Portuguese
+ - gptneo
+ - decoder
+ - foundation model
+ - text-generation
+ datasets:
+ - europarl_bilingual
+ - assin2
+ - dlb/plue
+ - oscar-corpus/OSCAR-2301
+ - PORTULAN/glue-ptpt
+ widget:
+ - text: A culinária portuguesa é rica em aromas e
+ - text: Os computadores hoje em dia são muito
+ - text: A literatura Portuguesa é
+ inference:
+   parameters:
+     temperature: 1
+     repetition_penalty: 2
+     max_new_tokens: 30
+     num_beams: 4
+     do_sample: true
+     top_k: 50
+ library_name: transformers
  ---
+
+ # GlórIA 1.3B
+
+ ## Model Description
+ **GlórIA** is a large generative language model with a special **focus on European Portuguese**.
+
+ It is a 1.3B-parameter model based on [GPTNeo](https://huggingface.co/EleutherAI/gpt-neo-1.3B), with 24 layers and a hidden size of 2048.
+
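+ As a quick sanity check, these architecture details can be read directly from the model's configuration (a minimal sketch, using the repository id from the examples below; `num_layers` and `hidden_size` are the standard `transformers` GPT-Neo config fields):
+
+ ```py
+ from transformers import AutoConfig
+
+ # Inspect the configuration shipped with this repository (see config.json below)
+ config = AutoConfig.from_pretrained('NOVA-vision-language/GlorIA-1.3B-original')
+ print(config.num_layers)   # 24
+ print(config.hidden_size)  # 2048
+ ```
+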
+ ## Training Data
+ **GlórIA 1.3B** was trained on a large corpus of approximately 35B tokens. This corpus was built by gathering multiple Portuguese sources:
+ - ArquivoPT News PT-PT Dataset: a collection of 1.4M European Portuguese archived news articles and periodicals from [Arquivo.pt](https://arquivo.pt/).
+ - [ClueWeb-Large PT-PT](https://lemurproject.org/clueweb22.php/): a multilingual corpus, similar to OSCAR. Metadata was used to filter only PT-PT webpages.
+ - [Europarl PT-PT](https://www.statmt.org/europarl/): a parallel corpus with documents such as transcripts from the European Parliament (we only used the PT-PT documents).
+ - [OpenSubtitles PT-PT](https://opus.nlpl.eu/OpenSubtitles.php): a corpus containing PT-PT subtitles from [OpenSubtitles](http://www.opensubtitles.org/).
+ - [OSCAR PT-PT](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201): a multilingual corpus obtained by filtering the Common Crawl corpus. We used metadata to keep only PT-PT webpages (see the streaming sketch after this list).
+ - PT WIKI: the Portuguese Wikipedia (2022/06/20 dump).
+
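+ For reference, the OSCAR portion can be streamed from the Hub. This is a minimal sketch, not the training pipeline itself: the dataset is gated (it requires accepting its terms and authenticating with the Hub), and the PT-PT metadata filtering used for training is not shown here.
+
+ ```py
+ from datasets import load_dataset
+
+ # Stream the Portuguese split of OSCAR 2201 without downloading it in full
+ oscar_pt = load_dataset('oscar-corpus/OSCAR-2201', 'pt', split='train', streaming=True)
+ print(next(iter(oscar_pt))['text'][:200])
+ ```
+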
+ <br>
+
+ ## Evaluation - CALAME-PT
+ GlórIA 1.3B's generative capabilities were evaluated on **CALAME-PT**, a new Portuguese benchmark whose goal is to predict the last word of a sentence given its context (a sketch of this protocol follows the table below).
+
+ | Model and Size | Exact-Match (%) |
+ | ------------------ | --------------- |
+ | Gervasio-PTPT (1B) | 44.01 |
+ | mGPT (1.3B) | 47.14 |
+ | GlórIA (1.3B) | 52.79 |
+
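+ The exact-match protocol can be approximated as follows. This is a minimal sketch, not the official CALAME-PT harness, and the context/target pair is hypothetical:
+
+ ```py
+ from transformers import pipeline
+
+ generator = pipeline('text-generation', model='NOVA-vision-language/GlorIA-1.3B-original')
+
+ def last_word_exact_match(context: str, target_word: str) -> bool:
+     # Greedily generate a short continuation and compare its first word to the target
+     out = generator(context, max_new_tokens=5, do_sample=False)[0]['generated_text']
+     continuation = out[len(context):].strip()
+     predicted = continuation.split()[0] if continuation else ''
+     return predicted.strip('.,;:!?') == target_word
+
+ print(last_word_exact_match('O fado é um género musical tipicamente', 'português'))
+ ```
+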
+
+ <br>
+
+ # How to use
+ ## Basic Inference Example
+ ```py
+ >>> from transformers import pipeline
+ >>> generator = pipeline('text-generation', model='NOVA-vision-language/GlorIA-1.3B-original')
+ >>> generator("A culinária portuguesa é rica em aromas e", do_sample=True, min_length=50)
+ [{'generated_text': 'A culinária portuguesa é rica em aromas e'}]
+ ```
+ ## Recommended Parameters and Usage (for more flexibility)
+ ```py
+ from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig, TextGenerationPipeline
+
+ # Load the model and its tokenizer
+ model = AutoModelForCausalLM.from_pretrained('NOVA-vision-language/GlorIA-1.3B-original')
+ tokenizer = AutoTokenizer.from_pretrained('NOVA-vision-language/GlorIA-1.3B-original')
+
+ generation_config = GenerationConfig(
+     max_new_tokens=50, do_sample=True, top_k=50, eos_token_id=model.config.eos_token_id,
+     no_repeat_ngram_size=0, num_beams=4, repetition_penalty=2.0, temperature=1.0,
+     output_scores=True, early_stopping=True
+ )
+ generator = TextGenerationPipeline(model=model, task="text-generation",
+                                    tokenizer=tokenizer, device=0)
+ completion_prompts = ["Fernando Pessoa foi um dos poetas mais relevantes de"]
+ out = generator(completion_prompts, generation_config=generation_config)
+ # [[{'generated_text': 'Fernando Pessoa foi um dos poetas mais relevantes de toda a literatura portuguesa, autor de uma obra que se estende por mais de quatro dezenas de livros, entre os quais "Mensagem", "O Guardador de Rebanhos", "Livro do desassossego", "Odes",'}]]
+ ```
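+ Note that `device=0` places the pipeline on the first GPU; pass `device=-1` to run on CPU.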
+
+ <br>
+
+ # Citation
+
+ Please use the following BibTeX to cite our paper:
+ ```bibtex
+ @InProceedings{gloria_ptpt_propor2024,
+   author="Lopes, Ricardo
+     and Magalhães, João
+     and Semedo, David",
+   title="GlórIA: A Generative and Open Large Language Model for Portuguese",
+   booktitle="Computational Processing of the Portuguese Language (PROPOR 2024)",
+   year="2024",
+ }
+ ```
+
+ **License**: GlórIA's usage is restricted to research-only purposes, subject to the ClueWeb22 Dataset license, which can be freely obtained [here](https://www.lemurproject.org/clueweb22/obtain.php).
+
+ # Acknowledgements
+
+ We would like to thank Arquivo.pt's team for their content-preservation efforts, and for all the help and guidance in accessing the archived web pages at scale.
+ This work was partially funded by the FCT project NOVA LINCS Ref. UIDP/04516/2020, by the CMU|Portugal project iFetch, Ref. CMUP LISBOA-01-0247-FEDER-045920, and by the FCT project Ref. Nº CPCA-IAC/AV/594875/2023.
+
+ <br>
config.json ADDED
@@ -0,0 +1,78 @@
+ {
+   "_name_or_path": "D:\\FCT\\TESE\\ThesisTrainer\\model_hub",
+   "activation_function": "gelu_new",
+   "architectures": [
+     "GPTNeoForCausalLM"
+   ],
+   "attention_dropout": 0,
+   "attention_layers": [
+     "global",
+     "local",
+     "global",
+     "local",
+     "global",
+     "local",
+     "global",
+     "local",
+     "global",
+     "local",
+     "global",
+     "local",
+     "global",
+     "local",
+     "global",
+     "local",
+     "global",
+     "local",
+     "global",
+     "local",
+     "global",
+     "local",
+     "global",
+     "local"
+   ],
+   "attention_types": [
+     [
+       [
+         "global",
+         "local"
+       ],
+       12
+     ]
+   ],
+   "bos_token_id": 50256,
+   "classifier_dropout": 0.1,
+   "do_sample": true,
+   "embed_dropout": 0,
+   "eos_token_id": 50256,
+   "gradient_checkpointing": false,
+   "hidden_size": 2048,
+   "initializer_range": 0.02,
+   "intermediate_size": null,
+   "layer_norm_epsilon": 1e-05,
+   "max_length": 50,
+   "max_position_embeddings": 2048,
+   "model_type": "gpt_neo",
+   "num_heads": 16,
+   "num_layers": 24,
+   "resid_dropout": 0,
+   "summary_activation": null,
+   "summary_first_dropout": 0.1,
+   "summary_proj_to_labels": true,
+   "summary_type": "cls_index",
+   "summary_use_proj": true,
+   "task_specific_params": {
+     "text-generation": {
+       "do_sample": true,
+       "max_length": 50,
+       "temperature": 0.9
+     }
+   },
+   "temperature": 0.9,
+   "tokenizer_class": "GPT2Tokenizer",
+   "torch_dtype": "float32",
+   "transformers_version": "4.29.1",
+   "use_cache": true,
+   "vocab_size": 50258,
+   "window_size": 256
+ }
generation_config.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 50256,
+   "do_sample": true,
+   "eos_token_id": 50256,
+   "max_length": 50,
+   "temperature": 0.9,
+   "transformers_version": "4.29.1"
+ }
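
These are the defaults that `transformers` applies at generation time when no explicit `GenerationConfig` is passed. They can be inspected directly (a minimal check, using the repository id from the README examples):

```py
from transformers import GenerationConfig

# Loads generation_config.json from the Hub repository
gen_config = GenerationConfig.from_pretrained('NOVA-vision-language/GlorIA-1.3B-original')
print(gen_config.temperature, gen_config.max_length, gen_config.do_sample)  # 0.9 50 True
```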
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7ff171448458ddb881d3095cc771f6d935004d12f807c304e5905f46a19fe492
+ size 5363015080
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d067b12f2fe852c21aab4e4cef51ef7ddee1ff47896385dc077c9b1c784c2a93
+ size 5363096533
special_tokens_map.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "bos_token": "<|endoftext|>",
+   "eos_token": "<|endoftext|>",
+   "unk_token": "<|endoftext|>"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "add_prefix_space": false,
+   "bos_token": "<|endoftext|>",
+   "eos_token": "<|endoftext|>",
+   "tokenizer_class": "GPT2Tokenizer",
+   "unk_token": "<|endoftext|>"
+ }
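
As special_tokens_map.json and tokenizer_config.json show, this is a GPT-2-style tokenizer in which a single `<|endoftext|>` token serves as BOS, EOS, and UNK. A quick check (assuming the repository id from the README examples):

```py
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained('NOVA-vision-language/GlorIA-1.3B-original')
# All three special tokens map to the same string
print(tok.bos_token, tok.eos_token, tok.unk_token)  # <|endoftext|> <|endoftext|> <|endoftext|>
```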
vocab.json ADDED
The diff for this file is too large to render. See raw diff