---
license: apache-2.0
datasets:
- ruanchaves/faquad-nli
language:
- pt
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
tags:
- textual-entailment
---

# TeenyTinyLlama-162m-FAQUAD

TeenyTinyLlama is a series of small foundational models trained on Portuguese. This repository contains a version of [TeenyTinyLlama-162m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-162m) fine-tuned on the [FaQuAD-NLI dataset](https://huggingface.co/datasets/ruanchaves/faquad-nli).

## Reproducing

```python
# Faquad-nli
!pip install transformers datasets evaluate accelerate -q

import evaluate
import numpy as np
from huggingface_hub import login
from datasets import load_dataset, Dataset, DatasetDict
from transformers import AutoTokenizer, DataCollatorWithPadding
from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer

# Basic fine-tuning arguments
token = "your_token"
task = "ruanchaves/faquad-nli"
model_name = "nicholasKluge/Teeny-tiny-llama-162m"
output_dir = "checkpoint"
learning_rate = 4e-5
per_device_train_batch_size = 16
per_device_eval_batch_size = 16
num_train_epochs = 3
weight_decay = 0.01
evaluation_strategy = "epoch"
save_strategy = "epoch"
hub_model_id = "nicholasKluge/Teeny-tiny-llama-162m-faquad"

# Log in to the Hub to load and push
login(token=token)

# Load the task
dataset = load_dataset(task)

# Create a `ModelForSequenceClassification`
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=2,
    id2label={0: "UNSUITABLE", 1: "SUITABLE"},
    label2id={"UNSUITABLE": 0, "SUITABLE": 1},
)

tokenizer = AutoTokenizer.from_pretrained(model_name)

# If the model does not have a pad_token, add one
#tokenizer.pad_token = tokenizer.eos_token
#model.config.pad_token_id = model.config.eos_token_id

# Format each example as `question + BOS + answer + EOS`
train = dataset['train'].to_pandas()
train['text'] = train['question'] + tokenizer.bos_token + train['answer'] + tokenizer.eos_token
train = train[['text', 'label']]
train['label'] = train['label'].astype(int)
train = Dataset.from_pandas(train)

test = dataset['test'].to_pandas()
test['text'] = test['question'] + tokenizer.bos_token + test['answer'] + tokenizer.eos_token
test = test[['text', 'label']]
test['label'] = test['label'].astype(int)
test = Dataset.from_pandas(test)

dataset = DatasetDict({
    "train": train,
    "test": test,
})

# Tokenize the dataset
def preprocess_function(examples):
    return tokenizer(examples["text"], truncation=True)

dataset_tokenized = dataset.map(preprocess_function, batched=True)

# Create a simple data collator
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

# Use accuracy as the evaluation metric
accuracy = evaluate.load("accuracy")

# Function to compute accuracy
def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return accuracy.compute(predictions=predictions, references=labels)

# Define training arguments
training_args = TrainingArguments(
    output_dir=output_dir,
    learning_rate=learning_rate,
    per_device_train_batch_size=per_device_train_batch_size,
    per_device_eval_batch_size=per_device_eval_batch_size,
    num_train_epochs=num_train_epochs,
    weight_decay=weight_decay,
    evaluation_strategy=evaluation_strategy,
    save_strategy=save_strategy,
    load_best_model_at_end=True,
    push_to_hub=True,
    hub_token=token,
    hub_private_repo=True,
    hub_model_id=hub_model_id,
    tf32=True,
)

# Define the Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset_tokenized["train"],
    eval_dataset=dataset_tokenized["test"],
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)
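
# Note: `tf32=True` (set above) assumes an NVIDIA Ampere or newer GPU;
# remove it from `TrainingArguments` if training on other hardware.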

# Train!
trainer.train()
```
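
## Inference example

Once fine-tuning is done, the classifier can be used for inference. The sketch below is a minimal example, not part of the training script: it assumes the checkpoint was pushed to the `hub_model_id` used above, the question/answer pair is made up, and it mirrors the `question + BOS + answer + EOS` formatting applied during training.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Fine-tuned checkpoint; replace with your own `hub_model_id` if you re-ran the script
model_id = "nicholasKluge/Teeny-tiny-llama-162m-faquad"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Made-up question/answer pair, formatted like the training examples
question = "Quando foi fundada a universidade?"
answer = "A universidade foi fundada em 1960."
text = question + tokenizer.bos_token + answer + tokenizer.eos_token

inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Map the predicted class id back to "SUITABLE" / "UNSUITABLE"
predicted_label = model.config.id2label[int(logits.argmax(dim=-1))]
print(predicted_label)
```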