|
--- |
|
license: apache-2.0 |
|
datasets: |
|
- Dahoas/synthetic-instruct-gptj-pairwise |
|
- databricks/databricks-dolly-15k |
|
- HuggingFaceH4/instruction-dataset |
|
- nicholasKluge/instruct-aira-dataset |
|
language: |
|
- pt |
|
metrics: |
|
- bleu |
|
library_name: transformers |
|
tags: |
|
- alignment |
|
- instruction tuned |
|
- text generation |
|
- conversation |
|
- assistant |
|
pipeline_tag: text-generation |
|
widget: |
|
- text: <|startoftext|>Olá! Qual o seu nome?<|endoftext|> |
|
example_title: Olá |
|
- text: >- |
|
<|startoftext|>Você pode me explicar o que é aprendizagem de |
|
máquina?<|endoftext|> |
|
example_title: Aprendizagem de máquina |
|
- text: <|startoftext|>Você sabe alguma coisa sobre ética das virtudes?<|endoftext|>
|
example_title: Ética das virtudes |
|
- text: <|startoftext|>O que posso fazer para alegrar minha namorada?<|endoftext|> |
|
example_title: Conselho |
|
inference: |
|
parameters: |
|
repetition_penalty: 1.2 |
|
temperature: 0.2 |
|
top_k: 30 |
|
top_p: 0.3 |
|
max_length: 200 |
|
length_penalty: 0.3 |
|
early_stopping: true |
|
--- |
|
# Aira-Instruct-PT-124M (Portuguese) |
|
|
|
`Aira-Instruct-PT-124M` is an instruction-tuned GPT-style model based on [GPT-2](https://huggingface.co/pierreguillou/gpt2-small-portuguese). The model was trained on a dataset of `prompt` and `completion` pairs generated via the [Self-Instruct](https://github.com/yizhongw/self-instruct) framework. Instruction tuning was achieved via conditional text generation.
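Concretely, each pair is concatenated into a single training string using the special tokens described in the Usage section below, so the model learns to generate the completion conditioned on the prompt. A minimal sketch, where `format_example` is a hypothetical helper and the exact preprocessing may differ:

```python
# Special tokens documented in the Usage section below
bos, eos = "<|startoftext|>", "<|endoftext|>"

def format_example(prompt: str, completion: str) -> str:
    # The model is trained to continue the prompt with the completion,
    # so both are packed into one string for conditional text generation.
    return bos + prompt + eos + completion + eos

print(format_example(
    "What is a language model?",
    "A language model is a probability distribution over a vocabulary.",
))
```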
|
|
|
The dataset used to train this model combines the following sources: the [`synthetic-instruct-gptj-pairwise`](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise) dataset, the [`databricks-dolly-15k`](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset, the [`instruction-dataset`](https://huggingface.co/datasets/HuggingFaceH4/instruction-dataset) dataset, and a subset of [Aira's](https://github.com/Nkluge-correa/Aira-EXPERT) fine-tuning dataset focused on Q&A about ethics, AI, AI safety, and related topics. The dataset is available in both Portuguese and English.
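For illustration, a hedged sketch of how such sources could be combined with the `datasets` library; the split names, column mappings, and `prompt`/`completion` schema here are assumptions, not the exact preprocessing pipeline:

```python
from datasets import load_dataset, concatenate_datasets

# Map each source onto a common (prompt, completion) schema before merging;
# the column names below are assumptions based on each dataset's public card.
dolly = load_dataset("databricks/databricks-dolly-15k", split="train")
dolly = dolly.map(
    lambda x: {"prompt": x["instruction"], "completion": x["response"]},
    remove_columns=dolly.column_names,
)

pairwise = load_dataset("Dahoas/synthetic-instruct-gptj-pairwise", split="train")
pairwise = pairwise.map(
    lambda x: {"prompt": x["prompt"], "completion": x["chosen"]},
    remove_columns=pairwise.column_names,
)

combined = concatenate_datasets([dolly, pairwise]).shuffle(seed=42)
```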
|
|
|
Check out our Gradio demo on [Spaces](https://huggingface.co/spaces/nicholasKluge/Aira-Demo).
|
|
|
## Details |
|
|
|
- **Size:** 124,441,344 parameters |
|
- **Dataset:** [Instruct-Aira Dataset](https://huggingface.co/datasets/nicholasKluge/instruct-aira-dataset) |
|
- **Language:** Portuguese |
|
- **Number of Epochs:** 5 |
|
- **Batch size:** 32 |
|
- **Optimizer:** `torch.optim.AdamW` (warmup_steps = 1e2, learning_rate = 5e-4, epsilon = 1e-8) |
|
- **GPU:** 1 NVIDIA A100-SXM4-40GB |
|
- **Emissions:** 0.0009 kgCO2 (Canada)
|
- **Total Energy Consumption:** 0.41 kWh |
|
|
|
| Epoch | Training Loss | Validation Loss |
|
|---|---|---| |
|
| 1 |0.947100|0.774946| |
|
| 2 |0.737357|0.730962| |
|
| 3 |0.657410|0.710232| |
|
| 4 |0.597437|0.705064| |
|
| 5 |0.551684|0.704830| |
|
|
|
This repository contains the notebook used to train this model.
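For reference, a minimal sketch of the optimizer setup implied by the details above; the linear warmup schedule and the `steps_per_epoch` placeholder are assumptions, and the notebook remains the authoritative source:

```python
import torch
from transformers import AutoModelForCausalLM, get_linear_schedule_with_warmup

# Base model listed in the introduction above
model = AutoModelForCausalLM.from_pretrained("pierreguillou/gpt2-small-portuguese")

# Hyperparameters from the Details list; steps_per_epoch is a placeholder
# that would normally come from len(train_dataloader).
steps_per_epoch = 1000
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=100,                    # warmup_steps = 1e2
    num_training_steps=5 * steps_per_epoch,  # 5 epochs
)
```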
|
|
|
## Usage |
|
|
|
Two special tokens are used to mark the user side of the interaction and the model's response: |
|
|
|
`<|startoftext|>`What is a language model?`<|endoftext|>`A language model is a probability distribution over a vocabulary.`<|endoftext|>` |
|
|
|
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Run on GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained('nicholasKluge/Aira-Instruct-PT-124M')
aira = AutoModelForCausalLM.from_pretrained('nicholasKluge/Aira-Instruct-PT-124M')

aira.eval()
aira.to(device)

question = input("Enter your question: ")

# Wrap the question in the special tokens the model was trained on
inputs = tokenizer(tokenizer.bos_token + question + tokenizer.eos_token, return_tensors="pt").to(device)

responses = aira.generate(
    **inputs,
    bos_token_id=tokenizer.bos_token_id,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
    do_sample=True,
    top_k=50,
    max_length=200,
    top_p=0.95,
    temperature=0.7,
    num_return_sequences=2,
)

print(f"Question: 👤 {question}\n")

# Decode each sample, dropping the special tokens and the echoed question
for i, response in enumerate(responses):
    print(f'Response {i+1}: 🤖 {tokenizer.decode(response, skip_special_tokens=True).replace(question, "")}')
```
|
|
|
The model will output something like: |
|
|
|
```markdown
>>> Question: 👤 Olá! Como você se chama?

>>> Response 1: 🤖 Olá! Meu nome é Aira e sou um chatbot projetado para conversar sobre Ética e Segurança da IA. Se você precisar de ajuda com um assunto diferente, por favor, peça "ajuda".
>>> Response 2: 🤖 Olá! Meu nome é Aira e sou um chatbot treinado para responder perguntas sobre Ética e Segurança da IA. Se você precisar de ajuda para navegar em nossa conversa, não hesite em pedir ajuda.
```
|
|
|
## Limitations |
|
|
|
🤥 Generative models can produce pseudo-informative content, that is, false information that may appear truthful.
|
|
|
🤬 In certain types of tasks, generative models can produce harmful and discriminatory content inspired by historical stereotypes. |
|
|
|
## Cite as 🤗 |
|
|
|
```latex
@misc{nicholas22aira,
  doi = {10.5281/zenodo.6989727},
  url = {https://huggingface.co/nicholasKluge/Aira-Instruct-PT-124M},
  author = {Nicholas Kluge Corrêa and Carolina Del Pino},
  title = {Aira},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
}
```
|
|
|
## License |
|
|
|
`Aira-Instruct-PT-124M` is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.