# %% [markdown]
# # Text classification 
# 
# Let's learn how to finetune the pre-trained BERT for text classification tasks. Say we are performing sentiment analysis, where our goal is to classify whether a sentence is positive or negative. Suppose we have a dataset containing sentences along with their labels. 
# 
# Consider the sentence 'I love Paris'. First, we tokenize the sentence, add the [CLS] token at the beginning, and add the [SEP] token at the end. Then, we feed the tokens as input to the pre-trained BERT and get the embeddings of all the tokens. 
# 
# Next, we ignore the embedding of all other tokens and take only the embedding of [CLS] token which is $R_{[CLS]}$. The embedding of the [CLS] token will hold the aggregate representation of the sentence. We feed $R_{[CLS]}$ to a classifier (feed-forward network with softmax function) and train the classifier to perform sentiment analysis. 
# 
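# %% [markdown]
# To make the classification head concrete, here is a minimal sketch (an illustration only, not the exact head inside BertForSequenceClassification) of feeding a stand-in $R_{[CLS]}$ vector through a feed-forward layer with a softmax function:

```python
import torch
import torch.nn as nn

hidden_size, num_labels = 768, 2     # BERT-base hidden size; binary sentiment labels
r_cls = torch.randn(1, hidden_size)  # stand-in for the [CLS] embedding R_[CLS]

# A linear layer followed by softmax acts as the classifier
classifier = nn.Linear(hidden_size, num_labels)
probs = torch.softmax(classifier(r_cls), dim=-1)
print(probs)  # the two class probabilities, which sum to 1
```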
# Wait! How does this differ from what we saw at the beginning of the section? How does finetuning the pre-trained BERT differ from using the pre-trained BERT as a feature extractor?
# 
# In the "Extracting embeddings from pre-trained BERT" section, we learned that after extracting the embedding $R_{[CLS]}$ of a sentence, we feed $R_{[CLS]}$ to a classifier and train the classifier to perform classification. Similarly, during finetuning, we feed $R_{[CLS]}$ to a classifier and train the classifier to perform classification.
# 
# The difference is that when we finetune the pre-trained BERT, we can update the weights of the pre-trained BERT along with a classifier. But when we use the pre-trained BERT as a feature extractor, we can update only the weights of a classifier and not the pre-trained BERT. 
# 
# During finetuning, we can adjust the weights of the model in the following two ways:
# 
# - Update the weights of the pre-trained BERT along with the classification layer 
# - Update only the weights of the classification layer and not the pre-trained BERT. When we do this, it becomes the same as using the pre-trained BERT as a feature extractor
# 
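# %% [markdown]
# The second option can be sketched as follows: freeze the pre-trained BERT weights so that only the classification head is updated. (To keep the sketch self-contained, a tiny randomly initialised BERT config stands in for the real pre-trained model; with bert-base-uncased, the same freezing loop applies.)

```python
from transformers import BertConfig, BertForSequenceClassification

# A tiny random BERT stands in for the pre-trained model in this illustration
config = BertConfig(vocab_size=100, hidden_size=32, num_hidden_layers=2,
                    num_attention_heads=2, intermediate_size=64)
model = BertForSequenceClassification(config)

# Freeze the BERT encoder; only the classification head stays trainable
for param in model.bert.parameters():
    param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f'trainable parameters: {trainable} / {total}')
```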
# The following figure shows how we finetune the pre-trained BERT for the sentiment analysis task:
# 
# 
# ![title](images/6.png)
# 
# As we can observe from the preceding figure, we feed the tokens to the pre-trained BERT and get the embeddings of all the tokens. We take the embedding of the [CLS] token, feed it to a feedforward network with a softmax function, and perform classification. 
# 
# Let's get a better understanding of how finetuning works by getting hands-on with finetuning the pre-trained BERT for sentiment analysis in the next section. 

# %%
from transformers import BertForSequenceClassification, BertTokenizerFast, Trainer, TrainingArguments
import datasets
import torch
import numpy as np
import os
from datasets_study import test_local_dataset

# %%
# The label column in the dataset must be named 'label'; otherwise, the Trainer raises an error
my_dataset_all = test_local_dataset().rename_columns({"review":"text", "sentiment":"label"})
train_set = my_dataset_all['train']
test_set = my_dataset_all['test']

# %% [markdown]
# 
# Next, let's download and load the pre-trained BERT model. In this example, we use the pre-trained bert-base-uncased model. As we can observe below, since we are performing sequence classification, we use the BertForSequenceClassification class: 
# 

# %%
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')

# %% [markdown]
# 
# Next, we download and load the tokenizer that was used to pretrain the bert-base-uncased model.
# As we can observe, we create the tokenizer using the BertTokenizerFast class instead of BertTokenizer. The BertTokenizerFast class has many advantages compared to BertTokenizer. We will learn about this in the next section: 
# 

# %%
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')

# %% [markdown]
# 
# Now that we have loaded the dataset and the model, let's preprocess the dataset. 

# %% [markdown]
# ## Preprocess the dataset
# We can preprocess the dataset in a quicker way using our tokenizer. For example, consider the sentence: 'I love Paris'.  
# 
# First, we tokenize the sentence and add the [CLS] token at the beginning and [SEP] token at the end as shown below: 
# 

# %% [markdown]
# tokens = [ [CLS], I, love, Paris, [SEP] ]

# %% [markdown]
# 
# Next, we map the tokens to the unique input ids (token ids). Suppose the following are the unique input ids (token ids):
# 

# %% [markdown]
# input_ids = [101, 1045, 2293, 3000, 102]

# %% [markdown]
# Then, we need to add the segment ids (token type ids). Wait, what are segment ids? Suppose we have two sentences in the input. In that case, segment ids are used to distinguish one sentence from the other. All the tokens from the first sentence will be mapped to 0 and all the tokens from the second sentence will be mapped to 1. Since here we have only one sentence, all the tokens will be mapped to 0 as shown below:
# 

# %% [markdown]
# token_type_ids = [0, 0, 0, 0, 0]
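# %% [markdown]
# For a sentence pair, the segment IDs can be built as in the following sketch (the token lists are written out by hand purely for illustration):

```python
# Tokens of a hypothetical sentence pair packed into one input
sent_a = ['[CLS]', 'i', 'love', 'paris', '[SEP]']
sent_b = ['it', 'is', 'beautiful', '[SEP]']

# Tokens of the first sentence map to 0, tokens of the second to 1
token_type_ids = [0] * len(sent_a) + [1] * len(sent_b)
print(token_type_ids)  # [0, 0, 0, 0, 0, 1, 1, 1, 1]
```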

# %% [markdown]
# 
# Now, we need to create the attention mask. The attention mask is used to differentiate the actual tokens from the [PAD] tokens: it maps all the actual tokens to 1 and the [PAD] tokens to 0. Suppose our sequence length should be 5. Our tokens list already has 5 tokens, so we don't need to add any [PAD] tokens, and our attention mask becomes: 
# 

# %% [markdown]
# attention_mask = [1, 1, 1, 1, 1]
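# %% [markdown]
# The padding and attention-mask logic can be sketched in plain Python. (pad_batch is a hypothetical helper that mimics what the tokenizer does; 0 is the input ID of BERT's [PAD] token.)

```python
def pad_batch(batch_ids, pad_id=0):
    # Pad every sequence to the length of the longest one in the batch,
    # marking real tokens with 1 and [PAD] tokens with 0
    max_len = max(len(ids) for ids in batch_ids)
    input_ids, attention_mask = [], []
    for ids in batch_ids:
        n_pad = max_len - len(ids)
        input_ids.append(ids + [pad_id] * n_pad)
        attention_mask.append([1] * len(ids) + [0] * n_pad)
    return input_ids, attention_mask

ids, mask = pad_batch([[101, 1045, 2293, 3000, 102], [101, 5055, 102]])
print(mask)  # [[1, 1, 1, 1, 1], [1, 1, 1, 0, 0]]
```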

# %% [markdown]
# 
# That's it. But instead of doing all the above steps manually, our tokenizer will do these steps for us. We just need to pass the sentence to the tokenizer as shown below: 
# 

# %%
tokenizer('I love Paris')

# %% [markdown]
# 
# With the tokenizer, we can also pass any number of sentences and perform padding dynamically. To do that, we set padding to True, which pads every sentence to the length of the longest one in the batch. Together with truncation=True, the max_length argument caps the sequence length. For instance, as shown below, we pass three sentences and set the maximum sequence length, max_length, to 5:
# 

# %%
tokenizer(['I love Paris', 'birds fly', 'snow fall'], padding=True, truncation=True, max_length=5)

# %% [markdown]
# 
# That's it. With the tokenizer, we can easily preprocess our dataset, so we define a function called preprocess for processing the dataset as shown below: 
# 

# %%
def preprocess(data):
    # Tokenize the text, padding to the longest sequence in the batch and truncating to the model's maximum length
    return tokenizer(data['text'], padding=True, truncation=True)

# %% [markdown]
# 
# Now, we preprocess the train and test sets using the preprocess function: 
# 

# %%
train_set = train_set.map(preprocess, batched=True, batch_size=len(train_set))
test_set = test_set.map(preprocess, batched=True, batch_size=len(test_set))

print(train_set.info)
print(test_set.info)

# %% [markdown]
# 
# Next, we use the set_format function to select the columns we need in our dataset and to specify the format we need them in, as shown below:  
# 

# %%
train_set.set_format('torch', columns=['input_ids', 'attention_mask', 'label'])
test_set.set_format('torch', columns=['input_ids', 'attention_mask', 'label'])

# %% [markdown]
# That's it. Now that we have the dataset ready, let's train the model. 

# %% [markdown]
# ## Training the model 

# %% [markdown]
# 
# Define the batch size and the number of epochs: 

# %%
batch_size = 16 # train on 16 samples per step
epochs = 5 # train for 5 epochs; one epoch is a full pass over the training set

# %% [markdown]
# 
# Define the warmup steps and weight decay:

# %%
# warmup_steps = 500
warmup_steps = 5 # use the first 5 steps to warm up the learning rate
weight_decay = 0.01 # apply weight decay (L2 regularization) of 0.01

# %% [markdown]
# 
# Define the training arguments:

# %%
training_args = TrainingArguments(
    output_dir='./results',
    num_train_epochs=epochs,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    warmup_steps=warmup_steps,
    weight_decay=weight_decay,
    evaluation_strategy='no', # one of 'no', 'steps', 'epoch'; 'no' disables evaluation during training
    # save_strategy='steps',  # save a checkpoint every save_steps steps
    # save_steps=500,
    save_strategy='epoch',  # save a checkpoint at the end of every epoch
    logging_dir='./logs',
)

# %% [markdown]
# 
# 
# Now define the trainer: 

# %%
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_set,
    eval_dataset=test_set
)
    

# %% [markdown]
# 
# 
# Start training the model:

# %%
trainer.train()

# %% [markdown]
# 
# After training, we can evaluate the model using the evaluate function:

# %%
eval_result = trainer.evaluate()
print(f'eval_result: {eval_result}')
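# %% [markdown]
# Note that evaluate only reports the loss here because no compute_metrics function was passed to the Trainer. A minimal accuracy metric could look like the following sketch; it would be wired in as Trainer(..., compute_metrics=compute_metrics):

```python
import numpy as np

def compute_metrics(eval_pred):
    # The Trainer calls this with a (logits, labels) pair
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {'accuracy': float((preds == labels).mean())}

# Quick check on dummy predictions: 2 of 3 are correct
dummy = (np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]]), np.array([1, 0, 0]))
print(compute_metrics(dummy))  # accuracy of about 0.667
```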

# %% 
# Test the finetuned model
import sys, os
sys.path.insert(0, os.path.dirname(__file__) + "/..")
from utils.base import setup_env, setup_workdir
from transformers import BertForSequenceClassification, BertTokenizerFast

setup_workdir(os.path.dirname(__file__))

model_path = './results/checkpoint-25'
model = BertForSequenceClassification.from_pretrained(model_path)

# tokenizer_path = './results/checkpoint-18'
# tokenizer = BertTokenizerFast.from_pretrained(tokenizer_path)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')

# Run inference with the loaded model and tokenizer
text = "We are very happy to show you the 🤗 Transformers library."
encoded_input = tokenizer(text, truncation=True, padding=True, return_tensors='pt')
output = model(**encoded_input)

# Take the class with the highest logit as the prediction
logits = output.logits
preds = logits.argmax(-1)

# Print the predicted class
print(preds)

# %%
# Inference via the pipeline API
import sys, os
sys.path.insert(0, os.path.dirname(__file__) + "/..")
from utils.base import setup_env, setup_workdir
from transformers import pipeline
from transformers import BertForSequenceClassification, BertTokenizerFast

setup_workdir(os.path.dirname(__file__))

model_path = './results/checkpoint-25'
# model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
# Load the locally finetuned model with a pipeline
classifier = pipeline(
    "text-classification",
    model=model_path,
    # model=model,
    tokenizer=tokenizer,
    framework="pt"  # use the PyTorch backend
)
# classifier = pipeline("sentiment-analysis")
# Run inference
text = "We are very happy to show you the 🤗 Transformers library."
result = classifier(text)

# Print the result
print(f'text: {text}\nresult: {result}')

# %%
