---
license: bigscience-bloom-rail-1.0
datasets:
- nicholasKluge/instruct-aira-dataset
language:
- pt
metrics:
- accuracy
library_name: transformers
tags:
- alignment
- instruction tuned
- text generation
- conversation
- assistant
pipeline_tag: text-generation
widget:
- text: "<|startofinstruction|>Me explique o que é Aprendizagem de Máquina?<|endofinstruction|>"
  example_title: Aprendizagem de Máquina
- text: "<|startofinstruction|>Você sabe alguma coisa sobre a Ética das Virtudes?<|endofinstruction|>"
  example_title: Ética
- text: "<|startofinstruction|>Como eu posso fazer a minha namorada feliz?<|endofinstruction|>"
  example_title: Conselho
inference:
  parameters:
    repetition_penalty: 1.2
    temperature: 0.2
    top_k: 30
    top_p: 0.3
    max_new_tokens: 100
    length_penalty: 0.3
    early_stopping: true
co2_eq_emissions:
  emissions: 0.80
  source: CodeCarbon
  training_type: fine-tuning
  geographical_location: Singapore
  hardware_used: NVIDIA A100-SXM4-40GB
---
# Aira-2-portuguese-560M

`Aira-2` is the second version of the Aira instruction-tuned series. `Aira-2-portuguese-560M` is an instruction-tuned model based on [BLOOM](https://huggingface.co/bigscience/bloom-560m). The model was trained on a dataset of prompts and completions generated synthetically by prompting already-tuned models (ChatGPT, Llama, Open-Assistant, etc.).

Check out our Gradio demo on [Spaces](https://huggingface.co/spaces/nicholasKluge/Aira-Demo-Portuguese).

## Details

- **Size:** 559,012,864 parameters
- **Dataset:** [Instruct-Aira Dataset](https://huggingface.co/datasets/nicholasKluge/instruct-aira-dataset)
- **Language:** Portuguese
- **Number of Epochs:** 3
- **Batch size:** 8
- **Optimizer:** `torch.optim.AdamW` (warmup_steps = 1e2, learning_rate = 5e-4, epsilon = 1e-8; see the sketch after this list)
- **GPU:** 1 NVIDIA A100-SXM4-40GB
- **Emissions:** 0.80 KgCO2 (Singapore)
- **Total Energy Consumption:** 1.64 kWh
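
For reference, a minimal sketch of how the optimizer settings above translate into PyTorch/`transformers` objects. The linear warmup schedule and the dataset size (hence the total step count) are assumptions for illustration; the exact setup lives in the training repository linked below.

```python
import torch
from transformers import AutoModelForCausalLM, get_linear_schedule_with_warmup

model = AutoModelForCausalLM.from_pretrained("nicholasKluge/Aira-2-portuguese-560M")

# Hyperparameters from the list above.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, eps=1e-8)

# Assumptions: a linear warmup schedule and a hypothetical dataset size.
dataset_size = 40_000                # hypothetical; see the dataset card for the real size
steps_per_epoch = dataset_size // 8  # batch size 8
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=100,                     # warmup_steps = 1e2
    num_training_steps=3 * steps_per_epoch,   # 3 epochs
)
```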

The [source code](https://github.com/Nkluge-correa/Aira) used to train this model is available on GitHub.
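
The emissions figure above was measured with [CodeCarbon](https://github.com/mlco2/codecarbon). A minimal sketch of how such tracking typically wraps a training run (illustrative only, not the repository's exact code):

```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()  # infers hardware and location automatically
tracker.start()

# ... training loop goes here ...

emissions_kg = tracker.stop()  # total kg CO2-eq for the tracked block
print(f"Estimated emissions: {emissions_kg:.2f} kg CO2-eq")
```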

## Usage

Three special tokens are used to mark the user side of the interaction and the model's response:

`<|startofinstruction|>`O que é um modelo de linguagem?`<|endofinstruction|>`Um modelo de linguagem é uma distribuição de probabilidade sobre um vocabulário.`<|endofcompletion|>`
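
Assuming these markers correspond to the tokenizer's `bos_token`, `sep_token`, and `eos_token` (consistent with the snippet below), a prompt is assembled like this:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('nicholasKluge/Aira-2-portuguese-560M')

# Assumed mapping:
# <|startofinstruction|> -> tokenizer.bos_token
# <|endofinstruction|>   -> tokenizer.sep_token
# <|endofcompletion|>    -> tokenizer.eos_token (closes the model's completion)
prompt = tokenizer.bos_token + "O que é um modelo de linguagem?" + tokenizer.sep_token
```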

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained('nicholasKluge/Aira-2-portuguese-560M')
aira = AutoModelForCausalLM.from_pretrained('nicholasKluge/Aira-2-portuguese-560M')

aira.eval()
aira.to(device)

question = input("Enter your question: ")

# Wrap the question in the instruction markers the model was trained on.
inputs = tokenizer(tokenizer.bos_token + question + tokenizer.sep_token,
    add_special_tokens=False,
    return_tensors="pt").to(device)

# Sampling must be enabled to return more than one sequence.
responses = aira.generate(**inputs, do_sample=True, max_new_tokens=100, num_return_sequences=2)

print(f"Question: 👤 {question}\n")

for i, response in enumerate(responses):
    # Drop special tokens and the echoed question from the decoded output.
    print(f'Response {i+1}: 🤖 {tokenizer.decode(response, skip_special_tokens=True).replace(question, "")}')
```

The model will output something like:

```markdown
>>> Question: 👤 Qual a capital da Alemanha?

>>> Response 1: 🤖 A capital da Alemanha é Berlim. É a maior cidade da Alemanha e serve como centro administrativo, cultural e político da Alemanha.
>>> Response 2: 🤖 A capital da Alemanha é Berlim. É a maior cidade da Alemanha e serve como centro administrativo, cultural e político da Alemanha.
```
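
The inference widget on this page uses the generation settings declared in this card's metadata. Continuing from the snippet above, a sketch of passing the same settings to `generate` (treat them as a starting point, not the only valid configuration):

```python
# Generation settings from the card's metadata (inference.parameters).
responses = aira.generate(
    **inputs,
    do_sample=True,          # required for temperature/top_k/top_p to take effect
    temperature=0.2,
    top_k=30,
    top_p=0.3,
    repetition_penalty=1.2,
    max_new_tokens=100,
    length_penalty=0.3,      # only affects beam-based decoding
    early_stopping=True,     # only affects beam-based decoding
    num_return_sequences=2,
)
```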

## Limitations

🤥 Generative models can produce pseudo-informative content, i.e., false information that may appear truthful.

🤬 On certain tasks, generative models can produce harmful and discriminatory content that reproduces historical stereotypes.

## Cite as 🤗

```latex
@misc{nicholas22aira,
  doi = {10.5281/zenodo.6989727},
  url = {https://huggingface.co/nicholasKluge/Aira-2-portuguese-560M},
  author = {Nicholas Kluge Corrêa},
  title = {Aira},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
}
```

## License

`Aira-2-portuguese-560M` is licensed under the RAIL License because it is a model derived from BLOOM. See the [LICENSE](LICENSE) file for more details.