---
license: bigscience-bloom-rail-1.0
datasets:
- nicholasKluge/fine-tuning-instruct-aira
- Dahoas/synthetic-instruct-gptj-pairwise
- databricks/databricks-dolly-15k
- HuggingFaceH4/instruction-dataset
language:
- pt
metrics:
- bleu
library_name: transformers
tags:
- alignment
- instruction tuned
- text generation
- conversation
- assistant
pipeline_tag: text-generation
widget:
- text: <|startoftext|>Olá! Qual o seu nome?<|endoftext|>
  example_title: Olá
- text: >-
    <|startoftext|>Você pode me explicar o que é aprendizagem de
    máquina?<|endoftext|>
  example_title: Aprendizagem de máquina
- text: <|startoftext|>Você sabe alguma coisa sobre ética das virtudes<|endoftext|>
  example_title: Ética das virtudes
- text: <|startoftext|>O que posso fazer para alegrar minha namorada?<|endoftext|>
  example_title: Conselho
inference:
  parameters:
    repetition_penalty: 1.2
    temperature: 0.2
    top_k: 30
    top_p: 0.3
    max_length: 100
    length_penalty: 0.3
    early_stopping: True
---
# Aira-Instruct-PT-1B7 (Portuguese)

`Aira-Instruct-PT-1B7` is an instruction-tuned GPT-style model based on [BLOOM](https://huggingface.co/bigscience/bloom-1b7). The model was trained on a dataset of `prompt`-`completion` pairs generated via the [Self-Instruct](https://github.com/yizhongw/self-instruct) framework. Instruction tuning of `Aira-Instruct-PT-1B7` was performed via conditional text generation.
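
In practice, conditional text generation here means concatenating each prompt and its completion into a single string delimited by the model's special tokens, following the same template shown in the Usage section below. A minimal sketch of how one prompt-completion pair could be serialized (the helper function and the 512-token cutoff are illustrative assumptions, not the exact preprocessing used for training):

```python
# Sketch: serialize one prompt-completion pair into the template
# <|startoftext|>prompt<|endoftext|>completion<|endoftext|> used by Aira.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/Aira-Instruct-PT-1B7")

def build_example(prompt: str, completion: str) -> str:
    # bos_token == "<|startoftext|>", eos_token == "<|endoftext|>"
    return tokenizer.bos_token + prompt + tokenizer.eos_token + completion + tokenizer.eos_token

text = build_example(
    "Você pode me explicar o que é aprendizagem de máquina?",
    "Aprendizagem de máquina é uma área da IA que estuda modelos capazes de aprender a partir de dados.",
)

# The 512-token cutoff is an assumption for this sketch, not the training setting.
tokens = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
```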

The dataset used to train this model combines the following sources: the [`synthetic-instruct-gptj-pairwise`](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise) dataset, the [`databricks_dolly_15k`](https://huggingface.co/datasets/HuggingFaceH4/databricks_dolly_15k) dataset, the [`instruction-dataset`](https://huggingface.co/datasets/HuggingFaceH4/instruction-dataset) dataset, and a subset of [Aira's](https://github.com/Nkluge-correa/Aira-EXPERT) fine-tuning dataset, which focuses on Q&A about ethics, AI, AI safety, and related topics. The dataset is available in both Portuguese and English.

Check out our Gradio demo on [Spaces](https://huggingface.co/spaces/nicholasKluge/Aira-Demo-Portuguese).

## Details

- **Size:** 559,012,864 total parameters
- **Dataset:** [Instruct-Aira Dataset](https://huggingface.co/datasets/nicholasKluge/fine-tuning-instruct-aira)
- **Language:** Portuguese
- **Number of Epochs:** 2
- **Batch size:** 6
- **Optimizer:** `torch.optim.AdamW` (warmup_steps = 1e2, learning_rate = 5e-4, epsilon = 1e-8)
- **GPU:** 1 NVIDIA A100-SXM4-40GB

| Epoch|Training Loss|Validation Loss|
|---|---|---|
| 1 |0.888486|0.744728|
| 2 |0.749612|0.673719|

This repository contains the notebook used to train this model.
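
For reference, here is a minimal sketch of the optimization setup listed in the Details section (AdamW with 100 warmup steps, learning rate 5e-4, epsilon 1e-8, batch size 6, 2 epochs). The toy data, padding length, and the embedding resize are assumptions for illustration; this is not the exact training notebook.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoModelForCausalLM, AutoTokenizer, get_linear_schedule_with_warmup

tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/Aira-Instruct-PT-1B7")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-1b7")
model.resize_token_embeddings(len(tokenizer))  # account for the added special tokens (assumption)

# Toy data standing in for the Instruct-Aira dataset (assumption).
texts = [
    tokenizer.bos_token + "Olá! Qual o seu nome?" + tokenizer.eos_token
    + "Meu nome é Aira." + tokenizer.eos_token,
]
encodings = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")
dataset = TensorDataset(encodings["input_ids"], encodings["attention_mask"])
loader = DataLoader(dataset, batch_size=6, shuffle=True)  # batch size from the Details section

# Hyperparameters from the Details section above.
epochs = 2
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=epochs * len(loader)
)

model.train()
for _ in range(epochs):
    for input_ids, attention_mask in loader:
        # Standard causal-LM objective: labels are the input ids, with padding ignored.
        labels = input_ids.clone()
        labels[attention_mask == 0] = -100
        loss = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels).loss
        loss.backward()
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
```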

## Usage

Two special tokens are used to mark the user side of the interaction and the model's response:

`<|startoftext|>`What is a language model?`<|endoftext|>`A language model is a probability distribution over a vocabulary.`<|endoftext|>`

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained('nicholasKluge/Aira-Instruct-PT-1B7')
aira = AutoModelForCausalLM.from_pretrained('nicholasKluge/Aira-Instruct-PT-1B7')

aira.eval()
aira.to(device)

question = input("Enter your question: ")

# Wrap the question in the special tokens the model was trained with:
# <|startoftext|>question<|endoftext|>
inputs = tokenizer(tokenizer.bos_token + question + tokenizer.eos_token, return_tensors="pt").to(device)

responses = aira.generate(**inputs,
    bos_token_id=tokenizer.bos_token_id,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
    do_sample=True,
    top_k=50,
    max_length=200,
    top_p=0.95,
    temperature=0.7,
    num_return_sequences=2)

print(f"Question: 👤 {question}\n")

for i, response in enumerate(responses):
    # Strip special tokens and the echoed question, keeping only the model's answer.
    print(f'Response {i+1}: 🤖 {tokenizer.decode(response, skip_special_tokens=True).replace(question, "")}')
```

The model will output something like:

```markdown
>>> Question: 👤 Olá! Como você se chama?

>>> Response 1: 🤖 Olá! Meu nome é Aira e sou um chatbot projetado para conversar sobre Ética e Segurança da IA. Se você precisar de ajuda com um assunto diferente, por favor, peça "ajuda".
>>> Response 2: 🤖 Olá! Meu nome é Aira e sou um chatbot treinado para responder perguntas sobre Ética e Segurança da IA. Se você precisar de ajuda para navegar em nossa conversa, não hesite em pedir ajuda.
```
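
The sampling settings above are fairly exploratory. The model-card header declares more conservative inference parameters (repetition_penalty 1.2, temperature 0.2, top_k 30, top_p 0.3, max_length 100), and a variant of the `generate` call using them might look like the sketch below, reusing `aira`, `inputs`, and `tokenizer` from the snippet above. This mirrors the header settings under plain sampling and is not necessarily the exact configuration of the hosted demo.

```python
# Alternative decoding with the inference parameters from the model-card header.
# length_penalty and early_stopping from the header only take effect with beam
# search, so they are omitted in this sampling-based sketch.
responses = aira.generate(**inputs,
    bos_token_id=tokenizer.bos_token_id,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
    do_sample=True,
    repetition_penalty=1.2,
    temperature=0.2,
    top_k=30,
    top_p=0.3,
    max_length=100,
    num_return_sequences=1)
```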

## Limitations

🤥 Generative models can produce pseudo-informative content, that is, false information that may appear truthful.

🤬 In certain types of tasks, generative models can produce harmful and discriminatory content inspired by historical stereotypes.

## Cite as 🤗

```latex
@misc{nicholas22aira,
  doi = {10.5281/zenodo.6989727},
  url = {https://huggingface.co/nicholasKluge/Aira-Instruct-PT-560M},
  author = {Nicholas Kluge Corrêa and Carolina Del Pino},
  title = {Aira},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
}
```

## License

`Aira-Instruct-PT-1B7` is licensed under the RAIL license because it is a model derived from BLOOM. See the [LICENSE](LICENSE) file for more details.