---
license: apache-2.0
datasets:
- ruanchaves/faquad-nli
language:
- pt
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
tags:
- textual-entailment
---
# TeenyTinyLlama-162m-FAQUAD

TeenyTinyLlama is a series of small foundation models natively trained in Portuguese.

This repository contains a version of [TeenyTinyLlama-162m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-162m) fine-tuned on the [FaQuAD-NLI dataset](https://huggingface.co/datasets/ruanchaves/faquad-nli).
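
Once fine-tuned, the model can be used as a regular `text-classification` pipeline. The snippet below is only a usage sketch: the repository id is inferred from this card's title and the question/answer pair is made up, so adjust both as needed. Inputs follow the same format used by the fine-tuning script in the next section (question + BOS token + answer + EOS token).

```python
from transformers import AutoTokenizer, pipeline

# Assumed repository id (inferred from this card's title); change it if the actual id differs
model_id = "nicholasKluge/TeenyTinyLlama-162m-faquad"

tokenizer = AutoTokenizer.from_pretrained(model_id)
classifier = pipeline("text-classification", model=model_id, tokenizer=tokenizer)

# The fine-tuning script concatenates question and answer with the tokenizer's
# special tokens, so inference inputs should be formatted the same way
question = "Quais são as universidades mencionadas no texto?"  # hypothetical example
answer = "A USP e a Unicamp."  # hypothetical example
text = question + tokenizer.bos_token + answer + tokenizer.eos_token

print(classifier(text))  # e.g. [{'label': 'SUITABLE', 'score': ...}]
```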

## Reproducing

```python
# Faquad-nli
! pip install transformers datasets evaluate accelerate -q

import evaluate
import numpy as np
from huggingface_hub import login
from datasets import load_dataset, Dataset, DatasetDict
from transformers import AutoTokenizer, DataCollatorWithPadding
from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer

# Basic fine-tuning arguments
token="your_token"
task="ruanchaves/faquad-nli"
model_name="nicholasKluge/Teeny-tiny-llama-162m"
output_dir="checkpoint"
learning_rate=4e-5
per_device_train_batch_size=16
per_device_eval_batch_size=16
num_train_epochs=3
weight_decay=0.01
evaluation_strategy="epoch"
save_strategy="epoch"
hub_model_id="nicholasKluge/Teeny-tiny-llama-162m-faquad"

# Login on the hub to load and push
login(token=token)

# Load the task
dataset = load_dataset(task)

# Create a `ModelForSequenceClassification`
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, 
    num_labels=2, 
    id2label={0: "UNSUITABLE", 1: "SUITABLE"}, 
    label2id={"UNSUITABLE": 0, "SUITABLE": 1}
)

tokenizer = AutoTokenizer.from_pretrained(model_name)

# If the tokenizer does not have a pad_token, set it to the EOS token
#tokenizer.pad_token = tokenizer.eos_token
#model.config.pad_token_id = model.config.eos_token_id

# Format each example as: question <bos> answer <eos>
train = dataset['train'].to_pandas()
train['text'] = train['question'] + tokenizer.bos_token + train['answer'] + tokenizer.eos_token
train = train[['text', 'label']]
train['label'] = train['label'].astype(int)
train = Dataset.from_pandas(train)

test = dataset['test'].to_pandas()
test['text'] = test['question'] + tokenizer.bos_token + test['answer'] + tokenizer.eos_token
test = test[['text', 'label']]
test['label'] = test['label'].astype(int)
test = Dataset.from_pandas(test)

dataset = DatasetDict({
    "train": train,  
    "test": test                  
})

# Tokenize the dataset
def preprocess_function(examples):
    return tokenizer(examples["text"], truncation=True)

dataset_tokenized = dataset.map(preprocess_function, batched=True)

# Create a simple data collator (pads each batch dynamically)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

# Use accuracy as evaluation metric
accuracy = evaluate.load("accuracy")

# Function to compute accuracy
def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return accuracy.compute(predictions=predictions, references=labels)

# Define training arguments
training_args = TrainingArguments(
    output_dir=output_dir,
    learning_rate=learning_rate,
    per_device_train_batch_size=per_device_train_batch_size,
    per_device_eval_batch_size=per_device_eval_batch_size,
    num_train_epochs=num_train_epochs,
    weight_decay=weight_decay,
    evaluation_strategy=evaluation_strategy,
    save_strategy=save_strategy,
    load_best_model_at_end=True,
    push_to_hub=True,
    hub_token=token,
    hub_private_repo=True,
    hub_model_id=hub_model_id,
    tf32=True,  # requires an NVIDIA Ampere (or newer) GPU; remove otherwise
)

# Define the Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset_tokenized["train"],
    eval_dataset=dataset_tokenized["test"],
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)

# Train!
trainer.train()

```
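
After training finishes, the best checkpoint (kept via `load_best_model_at_end=True`) can be scored on the held-out test split with the same `Trainer`. This is a minimal follow-up sketch, not part of the original script:

```python
# Evaluate the best checkpoint on the test split; reports eval_accuracy via compute_metrics
results = trainer.evaluate()
print(results)
```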