---
license: apache-2.0
datasets:
- nicholasKluge/instruct-aira-dataset
language:
- en
metrics:
- accuracy
library_name: transformers
tags:
- alignment
- instruction tuned
- text generation
- conversation
- assistant
pipeline_tag: text-generation
widget:
  - text: "<|startofinstruction|>What is your name?<|endofinstruction|>"
    example_title: Greetings
  - text: "<|startofinstruction|>Can you explain what is Machine Learning?<|endofinstruction|>"
    example_title: Machine Learning
  - text: "<|startofinstruction|>Do you know anything about virtue ethics?<|endofinstruction|>"
    example_title: Ethics
  - text: "<|startofinstruction|>How can I make my girlfriend happy?<|endofinstruction|>"
    example_title: Advice
inference:
  parameters:
    repetition_penalty: 1.2
    temperature: 0.2
    top_k: 30
    top_p: 0.3
    max_length: 200
    length_penalty: 0.3
    early_stopping: true
co2_eq_emissions:
  emissions: 0.77
  source: CodeCarbon
  training_type: fine-tuning
  geographical_location: United States of America
  hardware_used: NVIDIA A100-SXM4-40GB
---
# Aira-2-774M
`Aira-2` is the second version of the Aira instruction-tuned series. `Aira-2-774M` is an instruction-tuned GPT-style model based on [GPT-2](https://huggingface.co/gpt2-large). The model was trained on a dataset of prompts and completions generated synthetically by prompting already-tuned models (ChatGPT, Llama, Open-Assistant, etc.).

Check out our Gradio demo on [Spaces](https://huggingface.co/spaces/nicholasKluge/Aira-Demo).
## Details
- **Size:** 774,032,640 parameters
- **Dataset:** [Instruct-Aira Dataset](https://huggingface.co/datasets/nicholasKluge/instruct-aira-dataset)
- **Language:** English
- **Number of Epochs:** 3
- **Batch size:** 8
- **Optimizer:** `torch.optim.AdamW` (warmup_steps = 1e2, learning_rate = 5e-4, epsilon = 1e-8)
- **GPU:** 1 NVIDIA A100-SXM4-40GB
- **Emissions:** 0.77 KgCO2 (Singapore)
- **Total Energy Consumption:** 1.58 kWh
This repository has the [notebook](AIRA_FineTuning.ipynb) used to train this model.
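For reference, here is a minimal, hypothetical sketch of how the hyperparameters listed above could be wired together with `transformers` and `torch.optim.AdamW`. The real preprocessing, loss masking, and special-token setup live in the linked notebook; the pad token name and the tiny stand-in dataset below are assumptions made for illustration only.

```python
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer, get_linear_schedule_with_warmup

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Base checkpoint plus chat-style special tokens (the pad token name is an assumption;
# see AIRA_FineTuning.ipynb for the real setup).
tokenizer = AutoTokenizer.from_pretrained("gpt2-large")
tokenizer.add_special_tokens({
    "bos_token": "<|startofinstruction|>",
    "sep_token": "<|endofinstruction|>",
    "eos_token": "<|endofcompletion|>",
    "pad_token": "<|pad|>",
})
model = AutoModelForCausalLM.from_pretrained("gpt2-large")
model.resize_token_embeddings(len(tokenizer))
model.to(device)

# Illustrative stand-in for the Instruct-Aira Dataset: one prompt/completion pair
# wrapped in the special tokens described under "Usage".
texts = [
    tokenizer.bos_token + "What is your name?" + tokenizer.sep_token
    + "My name is Aira." + tokenizer.eos_token
]
encodings = tokenizer(texts, return_tensors="pt", padding=True)
dataloader = DataLoader(
    list(zip(encodings["input_ids"], encodings["attention_mask"])), batch_size=8
)

# Hyperparameters from the list above: 3 epochs, batch size 8, AdamW with
# lr = 5e-4, eps = 1e-8, and a 100-step linear warmup.
epochs = 3
optimizer = AdamW(model.parameters(), lr=5e-4, eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=epochs * len(dataloader)
)

model.train()
for _ in range(epochs):
    for input_ids, attention_mask in dataloader:
        input_ids = input_ids.to(device)
        attention_mask = attention_mask.to(device)
        # labels = input_ids (the notebook may mask prompt/padding tokens; kept simple here).
        outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=input_ids)
        outputs.loss.backward()
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
```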
## Usage
Three special tokens are used to mark the user side of the interaction and the model's response:
`<|startofinstruction|>`What is a language model?`<|endofinstruction|>`A language model is a probability distribution over a vocabulary.`<|endofcompletion|>`
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained('nicholasKluge/Aira-2-774M')
aira = AutoModelForCausalLM.from_pretrained('nicholasKluge/Aira-2-774M')

aira.eval()
aira.to(device)

question = input("Enter your question: ")

# Wrap the question in the special tokens the model was trained with:
# bos_token = <|startofinstruction|>, sep_token = <|endofinstruction|>
inputs = tokenizer(tokenizer.bos_token + question + tokenizer.sep_token, return_tensors="pt").to(device)

responses = aira.generate(**inputs,
    bos_token_id=tokenizer.bos_token_id,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
    do_sample=True,
    top_k=50,
    max_length=500,
    top_p=0.95,
    temperature=0.7,
    num_return_sequences=2)

print(f"Question: 👤 {question}\n")

# Decode each sampled completion and strip the echoed question from the output.
for i, response in enumerate(responses):
    print(f'Response {i+1}: 🤖 {tokenizer.decode(response, skip_special_tokens=True).replace(question, "")}')
```
The model will output something like:
```markdown
>>>Question: 👤 What is the capital of Brazil?
>>>Response 1: 🤖 The capital of Brazil is Brasília.
>>>Response 2: 🤖 The capital of Brazil is Brasília.
```
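Alternatively, the high-level `pipeline` API can be used with the same prompt format. The sketch below is a minimal example that reuses the sampling parameters declared in this card's metadata; the example question is illustrative.

```python
from transformers import pipeline

# Load the model through the text-generation pipeline.
generator = pipeline("text-generation", model="nicholasKluge/Aira-2-774M")

# Same prompt format as above: bos_token + question + sep_token.
prompt = generator.tokenizer.bos_token + "What is the capital of Brazil?" + generator.tokenizer.sep_token

outputs = generator(
    prompt,
    do_sample=True,
    temperature=0.2,
    top_k=30,
    top_p=0.3,
    repetition_penalty=1.2,
    max_length=200,
)
print(outputs[0]["generated_text"])
```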
## Limitations
🤥 Generative models can perpetuate the generation of pseudo-informative content, that is, false information that may appear truthful.
🤬 In certain types of tasks, generative models can produce harmful and discriminatory content inspired by historical stereotypes.
## Evaluation
| Model|Average|[ARC](https://arxiv.org/abs/1803.05457)|[HellaSwag](https://arxiv.org/abs/1905.07830)|[MMLU](https://arxiv.org/abs/2009.03300)|[TruthfulQA](https://arxiv.org/abs/2109.07958)|
|---|---|---|---|---|---|
| [Aira-2-774M](https://huggingface.co/nicholasKluge/Aira-2-774M) |34.00|**28.75**|40.80|25.10|**41.33**|
| GPT-2-large | **34.08** | 25.94 | **45.60** | **26.08** | 38.71 |
* Evaluations were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)). The notebook used to run these evaluations is available in [this repo](lm_evaluation_harness.ipynb).
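As a rough guide, the benchmarks above can also be run through the harness's Python API. The sketch below assumes a recent lm-evaluation-harness release (v0.4+); task names and few-shot settings differ between versions, and the linked notebook may use an older interface.

```python
import lm_eval

# Evaluate the model on the four benchmarks from the table above.
# The task names ("arc_challenge", "hellaswag", "mmlu", "truthfulqa_mc2")
# follow the v0.4+ naming convention and are assumptions here.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=nicholasKluge/Aira-2-774M",
    tasks=["arc_challenge", "hellaswag", "mmlu", "truthfulqa_mc2"],
    batch_size=8,
)
print(results["results"])
```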
## Cite as 🤗
```latex
@misc{nicholas22aira,
doi = {10.5281/zenodo.6989727},
url = {https://huggingface.co/nicholasKluge/Aira-2-774M},
author = {Nicholas Kluge Corrêa},
title = {Aira},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
}
```
## License
`Aira-2-774M` is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.