---
license: apache-2.0
datasets:
- Dahoas/synthetic-instruct-gptj-pairwise
- databricks/databricks-dolly-15k
- HuggingFaceH4/instruction-dataset
- nicholasKluge/instruct-aira-dataset
language:
- pt
metrics:
- bleu
library_name: transformers
tags:
- alignment
- instruction tuned
- text generation
- conversation
- assistant
pipeline_tag: text-generation
widget:
- text: <|startoftext|>Olá! Qual o seu nome?<|endoftext|>
  example_title: Olá
- text: >-
    <|startoftext|>Você pode me explicar o que é aprendizagem de
    máquina?<|endoftext|>
  example_title: Aprendizagem de máquina
- text: <|startoftext|>Você sabe alguma coisa sobre ética das virtudes<|endoftext|>
  example_title: Ética das virtudes
- text: <|startoftext|>O que posso fazer para alegrar minha namorada?<|endoftext|>
  example_title: Conselho
inference:
  parameters:
    repetition_penalty: 1.2
    temperature: 0.2
    top_k: 30
    top_p: 0.3
    max_length: 200
    length_penalty: 0.3
    early_stopping: true
---
# Aira-Instruct-PT-124M (Portuguese)

`Aira-Instruct-PT-124M` is an instruction-tuned GPT-style model based on [GPT-2](https://huggingface.co/pierreguillou/gpt2-small-portuguese). The model was trained on a dataset of prompt-completion pairs generated via the [Self-Instruct](https://github.com/yizhongw/self-instruct) framework. Instruction tuning of `Aira-Instruct-PT-124M` was performed via conditional text generation.

The dataset used to train this model combines the following sources: the [`synthetic-instruct-gptj-pairwise`](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise) dataset, the [`databricks_dolly_15k`](https://huggingface.co/datasets/HuggingFaceH4/databricks_dolly_15k) dataset, the [`instruction-dataset`](https://huggingface.co/datasets/HuggingFaceH4/instruction-dataset) dataset, and a subset of [Aira's](https://github.com/Nkluge-correa/Aira-EXPERT) fine-tuning dataset, focused on Q&A about ethics, AI, AI safety, and related topics. The dataset is available in both Portuguese and English.
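
For reference, here is a minimal sketch of how the fine-tuning dataset can be inspected with the 🤗 `datasets` library; the split name and field layout are assumptions, so check the dataset card to confirm the exact schema:

```python
from datasets import load_dataset

# Load the Instruct-Aira dataset from the Hugging Face Hub.
# The split name ("train") and the field names are assumptions; inspect
# `dataset.features` to confirm the actual schema.
dataset = load_dataset("nicholasKluge/instruct-aira-dataset", split="train")

print(dataset[0])  # expected to contain a prompt and its completion
```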

Check out our Gradio demo in [Spaces](https://huggingface.co/spaces/nicholasKluge/Aira-Demo).

## Details

- **Size:** 124,441,344 parameters
- **Dataset:** [Instruct-Aira Dataset](https://huggingface.co/datasets/nicholasKluge/instruct-aira-dataset)
- **Language:** Portuguese
- **Number of Epochs:** 5
- **Batch size:** 32
- **Optimizer:** `torch.optim.AdamW` (warmup_steps = 1e2, learning_rate = 5e-4, epsilon = 1e-8)
- **GPU:** 1 NVIDIA A100-SXM4-40GB
- **Emissions:** 0.0009 KgCO2 (Canada)
- **Total Energy Consumption:** 0.41 kWh

| Epoch | Training Loss | Validation Loss |
|---|---|---|
| 1 |0.947100|0.774946|
| 2 |0.737357|0.730962|
| 3 |0.657410|0.710232|
| 4 |0.597437|0.705064|
| 5 |0.551684|0.704830|

This repository contains the notebook used to train this model.
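
For illustration, here is a minimal sketch of the optimization setup implied by the details above; the scheduler type and step counts are assumptions, while the AdamW hyperparameters, epoch count, and base model come from this card:

```python
import torch
from transformers import AutoModelForCausalLM, get_linear_schedule_with_warmup

# Base model named in this card.
model = AutoModelForCausalLM.from_pretrained("pierreguillou/gpt2-small-portuguese")

# AdamW hyperparameters from the Details section above.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, eps=1e-8)

num_epochs = 5          # from the card
steps_per_epoch = 1000  # hypothetical; depends on dataset size and the batch size of 32
scheduler = get_linear_schedule_with_warmup(  # linear warmup is an assumption
    optimizer,
    num_warmup_steps=100,
    num_training_steps=num_epochs * steps_per_epoch,
)
```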

## Usage

Two special tokens are used to mark the user side of the interaction and the model's response:

`<|startoftext|>`What is a language model?`<|endoftext|>`A language model is a probability distribution over a vocabulary.`<|endoftext|>`

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the tokenizer and the instruction-tuned model.
tokenizer = AutoTokenizer.from_pretrained('nicholasKluge/Aira-Instruct-PT-124M')
aira = AutoModelForCausalLM.from_pretrained('nicholasKluge/Aira-Instruct-PT-124M')

aira.eval()
aira.to(device)

question = input("Enter your question: ")

# Wrap the question in the special tokens used during instruction tuning.
inputs = tokenizer(tokenizer.bos_token + question + tokenizer.eos_token, return_tensors="pt").to(device)

# Sample two candidate responses.
responses = aira.generate(**inputs,
    bos_token_id=tokenizer.bos_token_id,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
    do_sample=True,
    top_k=50,
    max_length=200,
    top_p=0.95,
    temperature=0.7,
    num_return_sequences=2)

print(f"Question: 👤 {question}\n")

for i, response in enumerate(responses):
    # Strip the special tokens and the echoed question from the decoded output.
    print(f'Response {i+1}: 🤖 {tokenizer.decode(response, skip_special_tokens=True).replace(question, "")}')
```

The model will output something like:

```markdown
>>> Question: 👤 Olá! Como você se chama?

>>> Response 1: 🤖 Olá! Meu nome é Aira e sou um chatbot projetado para conversar sobre Ética e Segurança da IA. Se você precisar de ajuda com um assunto diferente, por favor, peça "ajuda".
>>> Response 2: 🤖 Olá! Meu nome é Aira e sou um chatbot treinado para responder perguntas sobre Ética e Segurança da IA. Se você precisar de ajuda para navegar em nossa conversa, não hesite em pedir ajuda.
```
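
As a sketch, the more conservative decoding parameters listed in this card's inference widget (`temperature=0.2`, `top_k=30`, `top_p=0.3`, `repetition_penalty=1.2`, `max_length=200`) can be plugged into the same `generate` call; this reuses `tokenizer`, `aira`, and `inputs` from the example above:

```python
# Conservative decoding with the widget parameters from the model card header.
conservative = aira.generate(**inputs,
    do_sample=True,
    temperature=0.2,
    top_k=30,
    top_p=0.3,
    repetition_penalty=1.2,
    max_length=200,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id)

print(tokenizer.decode(conservative[0], skip_special_tokens=True))
```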

## Limitations

🤥 Generative models can perpetuate the generation of pseudo-informative content, that is, false information that may appear truthful.

🤬 In certain types of tasks, generative models can produce harmful and discriminatory content inspired by historical stereotypes.

## Cite as 🤗

```latex
@misc{nicholas22aira,
  doi = {10.5281/zenodo.6989727},
  url = {https://huggingface.co/nicholasKluge/Aira-Instruct-PT-124M},
  author = {Nicholas Kluge Corrêa and Carolina Del Pino},
  title = {Aira},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
}
```

## License

`Aira-Instruct-PT-124M` is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.