Fine-tuning using transformers trainer

#2
by naif576 - opened

hello there,
how can I fine-tune this model using the Trainer class from huggingface transformers?
What should the input to the tokenizer look like?

If you mean fine-tuning a model based on this model, you can refer to https://github.com/yangheng95/PyABSA/blob/v2/examples-v2/aspect_polarity_classification/Aspect_Sentiment_Classification.ipynb. If you want to fine-tune this model itself, you may need to check the tutorials provided by huggingface.

Yes, I meant fine-tuning this model,
but I am struggling with the input.
How would I combine the sentence with the aspect?
If you could provide a sample of training this model using the huggingface Trainer, that would be helpful.

I agree with @naif576. Is there any example of how we can use the huggingface framework to fine-tune the model, and what should the input data look like? Also, is there any concise documentation on quadruple extraction, i.e. how to train the model and what the training input data should look like? Great work indeed @yangheng

Actually, this model is only intended for aspect sentiment classification. For any other task, please refer to the tutorials in PyABSA.

Ok, how can we train this model for aspect sentiment classification using the huggingface framework?
And thanks for responding.

import torch
from datasets import load_dataset
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler

# Set the batch size
batch_size = 16

# Load the SST-2 dataset from the GLUE benchmark
dataset = load_dataset('glue', 'sst2')

# Extract the texts and labels

train_texts = dataset['train']['sentence']
train_labels = dataset['train']['label']
test_texts = dataset['validation']['sentence']
test_labels = dataset['validation']['label']

# Load the tokenizer

tokenizer = AutoTokenizer.from_pretrained('yangheng/deberta-v3-base-absa-v1.1')

# Tokenize the data

train_encodings = tokenizer(train_texts, truncation=True, padding=True, max_length=80)
test_encodings = tokenizer(test_texts, truncation=True, padding=True, max_length=80)

train_dataset = TensorDataset(
    torch.tensor(train_encodings['input_ids']),
    torch.tensor(train_encodings['attention_mask']),
    torch.tensor(train_labels)
)

test_dataset = TensorDataset(
    torch.tensor(test_encodings['input_ids']),
    torch.tensor(test_encodings['attention_mask']),
    torch.tensor(test_labels)
)

# Create the data loaders

train_sampler = RandomSampler(train_dataset)
train_dataloader = DataLoader(train_dataset, sampler=train_sampler, batch_size=batch_size)

test_sampler = SequentialSampler(test_dataset)
test_dataloader = DataLoader(test_dataset, sampler=test_sampler, batch_size=batch_size)

# Load the model

model = AutoModelForSequenceClassification.from_pretrained('yangheng/deberta-v3-base-absa-v1.1')

model.to('cuda')

# Set the optimizer and learning rate (torch.optim.AdamW; the AdamW in transformers is deprecated)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Train the model

model.train()

for epoch in range(3):
    for batch in train_dataloader:
        inputs = {
            'input_ids': batch[0].to('cuda'),
            'attention_mask': batch[1].to('cuda'),
            'labels': batch[2].to('cuda')
        }

        optimizer.zero_grad()

        outputs = model(**inputs)

        loss = outputs.loss
        loss.backward()

        optimizer.step()

# Evaluate the model

model.eval()

all_predictions = []
all_labels = []

with torch.no_grad():
    for batch in test_dataloader:
        inputs = {
            'input_ids': batch[0].to('cuda'),
            'attention_mask': batch[1].to('cuda'),
            'labels': batch[2].to('cuda')
        }

        outputs = model(**inputs)

        logits = outputs.logits
        predictions = torch.argmax(logits, dim=-1)

        all_predictions.extend(predictions.cpu().tolist())
        all_labels.extend(inputs['labels'].cpu().tolist())

accuracy = accuracy_score(all_labels, all_predictions)
precision, recall, f1, _ = precision_recall_fscore_support(all_labels, all_predictions, average='binary')
print(f'Accuracy: {accuracy}')
print(f'Precision: {precision}')
print(f'Recall: {recall}')
print(f'F1: {f1}')

@naif576 @Abesadi Please see the example. However, I think it will be difficult for you to prepare the dataset.
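Since the original question was specifically about the Trainer class, here is a minimal sketch of the same SST-2 fine-tuning using datasets.map and the huggingface Trainer. This is only a sketch: the output_dir, the hyperparameters, and the macro metric averaging are illustrative choices, not taken from the example above.

import numpy as np
from datasets import load_dataset
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

dataset = load_dataset('glue', 'sst2')
tokenizer = AutoTokenizer.from_pretrained('yangheng/deberta-v3-base-absa-v1.1')
model = AutoModelForSequenceClassification.from_pretrained('yangheng/deberta-v3-base-absa-v1.1')

# Tokenize the whole dataset; padding to a fixed length keeps the default collator simple
def tokenize(batch):
    return tokenizer(batch['sentence'], truncation=True, padding='max_length', max_length=80)

encoded = dataset.map(tokenize, batched=True)

# Macro averaging, since the pretrained head has three sentiment labels
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    accuracy = accuracy_score(labels, predictions)
    precision, recall, f1, _ = precision_recall_fscore_support(labels, predictions, average='macro')
    return {'accuracy': accuracy, 'precision': precision, 'recall': recall, 'f1': f1}

training_args = TrainingArguments(
    output_dir='./absa-finetune',      # placeholder path
    num_train_epochs=3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=5e-5,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=encoded['train'],
    eval_dataset=encoded['validation'],
    compute_metrics=compute_metrics,
)

trainer.train()
print(trainer.evaluate())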

thank you so much, that helped me a lot.
About the dataset: preparing the input is the main issue I am facing.
I could not figure out how to combine the sentence with an aspect to feed it to the model for training.

If you could edit the example to use an aspect-based dataset, that would help.

@yangheng can you please confirm whether the input should look like the following for the code you shared above:

I am an engineer and I use matlab and stata for data analysis and currently taking Machine Learning course $T$ by Stanford which is fabulous .
course
Positive

@Abesadi No, it should be:

I am an engineer and I use matlab and stata for data analysis and currently taking Machine Learning $T$ by Stanford which is fabulous .
course
Positive

ref: https://github.com/yangheng95/ABSADatasets/issues/47
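
To make that format concrete, here is a hypothetical sketch of turning such a three-line file into inputs for the tokenizer in the example above. The file name, the label_map, and passing the aspect as the second sequence are assumptions, not taken from this thread (the model card's pipeline usage passes the aspect as text_pair; check model.config.id2label for the exact label order).

from transformers import AutoTokenizer

# Assumed label mapping; verify against model.config.id2label
label_map = {'Negative': 0, 'Neutral': 1, 'Positive': 2}

def read_apc_file(path):
    # Every example occupies three consecutive lines:
    # sentence with $T$, aspect term, polarity
    texts, aspects, labels = [], [], []
    with open(path, encoding='utf-8') as f:
        lines = [line.strip() for line in f if line.strip()]
    for i in range(0, len(lines), 3):
        sentence_with_t, aspect, polarity = lines[i], lines[i + 1], lines[i + 2]
        texts.append(sentence_with_t.replace('$T$', aspect))   # restore the aspect in the sentence
        aspects.append(aspect)
        labels.append(label_map[polarity])
    return texts, aspects, labels

train_texts, train_aspects, train_labels = read_apc_file('train.dat')  # hypothetical file name

tokenizer = AutoTokenizer.from_pretrained('yangheng/deberta-v3-base-absa-v1.1')
# Pass the aspect as the second sequence, mirroring classifier(text, text_pair=aspect)
train_encodings = tokenizer(train_texts, train_aspects,
                            truncation=True, padding=True, max_length=80)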

@yangheng hello, good evening. I tried to run the code you wrote above, but I get this error:

ModuleNotFoundError Traceback (most recent call last)
Cell In[3], line 9
4 from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
7 batch_size = 16
----> 9 from dataset import load_dataset

ModuleNotFoundError: No module named 'dataset'

Could you tell me how to deal with it, please? Thank you for reading and replying.

@sunybright

run
pip install datasets
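
Note also that the traceback shows from dataset import load_dataset; the package and module are named datasets (plural), as in the example above:

from datasets import load_dataset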

Hello, could you help me check whether I am feeding the data to the model in the right way?
I have seen the training data given online, which looks like:

Mmmm& I forgot how much I love having a $T$ . Even if it is slightly pointless !
ipad
Positive

But I don't know how to feed data of this form to the model. Thus I only input the sentence and the label into the model;
the sentence (before encoding) looks like this (with the aspect simply replaced by $T$):
my day off after a wedding consist of $T$ zelda and chinese food.
and the labels are 0, 1, or 2 (not one-hot vectors).

Thanks for reading and replying.

Oh, sorry, maybe I have not made myself clear. I want to fine-tune the model on my own dataset. It seems that the method given in https://github.com/yangheng95/PyABSA/blob/v2/examples-v2/aspect_polarity_classification/inference.py doesn't give an example of fine-tuning a model. I followed the code you wrote in your earlier comment (https://huggingface.co/yangheng/deberta-v3-base-absa-v1.1/discussions/2#6491666b8d62a5e264bf7154), but I don't know what train_texts and test_texts should look like.
Thank you for helping.
