---
language:
  - zh
tags:
  - SequenceClassification
  - 古文
  - 文言文
  - ancient
  - classical
  - letter
  - 书信标题
license: cc-by-nc-sa-4.0
---

# BertForSequenceClassification model (Classical Chinese)

Open In Colab

This BertForSequenceClassification Classical Chinese model is intended to predict whether a Classical Chinese sentence is a letter title (书信标题) or not. The model inherits the BERT base Chinese model (MLM), is fine-tuned on a large corpus of Classical Chinese (a 3 GB textual dataset), and is then combined with the BertForSequenceClassification architecture to perform a binary classification task.

- Labels: 0 = not-letter, 1 = letter

## Model description

The BertForSequenceClassification architecture inherits the BERT base model and appends a fully-connected linear layer to perform a binary classification task. More precisely, it combines two components:

- Masked language modeling (MLM): the MLM objective randomly masks 15% of the tokens in the input, and the model is trained to predict the masked tokens. The BERT base model uses this MLM objective and is pre-trained on a large corpus; BERT has been shown to produce robust word embeddings that capture rich contextual and semantic relationships. Our model inherits the publicly available pre-trained BERT Chinese model trained on modern Chinese data. To perform the Classical Chinese letter classification task, we first fine-tuned the model on a large corpus of Classical Chinese (3 GB of textual data) and then connected it to the BertForSequenceClassification architecture (see the first sketch after this list).

- Sequence classification: the model appends a fully-connected linear layer that outputs one logit per class, from which class probabilities are obtained via softmax. In our binary classification task, this final linear layer has two output units (see the second sketch below).
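
To make the MLM objective concrete, here is a minimal fill-mask sketch using the public `bert-base-chinese` checkpoint that our model inherits from; the example sentence is purely illustrative:

```python
from transformers import pipeline

# The MLM objective: predict the most likely tokens for a [MASK] position.
# This uses the inherited modern-Chinese checkpoint, not the fine-tuned model.
fill_mask = pipeline('fill-mask', model='bert-base-chinese')
for candidate in fill_mask('巴黎是法國的[MASK]都。'):  # "Paris is the [MASK]ital of France."
    print(candidate['token_str'], candidate['score'])
```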
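
And a short sketch of the classification head itself, assuming the hosted checkpoint's config sets `num_labels=2`:

```python
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained('cbdb/ClassicalChineseLetterClassification')

# The head maps the pooled [CLS] representation (hidden_size = 768 for
# BERT base) to one logit per class.
print(model.classifier)         # e.g. Linear(in_features=768, out_features=2, bias=True)
print(model.config.num_labels)  # 2
```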

## Intended uses & limitations

Note that this model is primarily aimed at predicting whether a Classical Chinese sentence is a letter title (书信标题) or not.

## How to use

Here is how to use this model to classify a given text in PyTorch:

### 1. Import model and packages

```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch
import numpy as np

tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
model = BertForSequenceClassification.from_pretrained('cbdb/ClassicalChineseLetterClassification',
                                                      output_attentions=False,
                                                      output_hidden_states=False)
```
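
The model runs on the CPU by default, which is sufficient for single sentences. A minimal optional sketch for GPU use (if you move the model, the input tensors created in `predict_class` below must be moved to the same device):

```python
# Optional: run on a GPU when available. Inputs created later must be moved
# to the same device, e.g. test_seq.to(device) and test_mask.to(device).
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)
```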

### 2. Make a prediction

```python
max_seq_len = 512

# Label mapping used throughout: 0 = not-letter, 1 = letter
label2idx = {'not-letter': 0, 'letter': 1}
idx2label = {v: k for k, v in label2idx.items()}

def softmax(vector):
    e = np.exp(vector)
    return e / e.sum()

def predict_class(test_sen):
    tokens_test = tokenizer.encode_plus(
        test_sen,
        add_special_tokens=True,
        return_attention_mask=True,
        padding=True,
        max_length=max_seq_len,
        return_tensors='pt',
        truncation=True
    )

    # encode_plus with return_tensors='pt' already returns tensors
    test_seq = tokens_test['input_ids']
    test_mask = tokens_test['attention_mask']

    # get predictions for the input sentence
    with torch.no_grad():
        outputs = model(test_seq, test_mask)
        logits = outputs.logits.detach().cpu().numpy()

    # convert the two logits into class probabilities
    softmax_score = softmax(logits)
    pred_class_dict = {k: v for k, v in zip(label2idx.keys(), softmax_score[0])}
    return pred_class_dict
```

### 3. Change your sentence here

```python
test_sen = '上丞相康思公書'
pred_class_proba = predict_class(test_sen)
print(f'The predicted probability for the {list(pred_class_proba.keys())[0]} class: {list(pred_class_proba.values())[0]}')
print(f'The predicted probability for the {list(pred_class_proba.keys())[1]} class: {list(pred_class_proba.values())[1]}')
>>> The predicted probability for the not-letter class: 0.002029061783105135
>>> The predicted probability for the letter class: 0.9979709386825562

pred_class = idx2label[np.argmax(list(pred_class_proba.values()))]
print(f'The predicted class is: {pred_class}')
>>> The predicted class is: letter
```
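
For scoring many titles at once, a minimal batch sketch (reusing `tokenizer`, `model`, `max_seq_len`, and `idx2label` from the steps above; the second sentence is an illustrative non-letter title):

```python
sentences = ['上丞相康思公書', '秋日登吳公臺上寺遠眺']

# Tokenize the whole batch in one call; padding aligns sequence lengths.
batch = tokenizer(sentences, padding=True, truncation=True,
                  max_length=max_seq_len, return_tensors='pt')

with torch.no_grad():
    logits = model(**batch).logits

# One softmax per row gives the class probabilities for each sentence.
probs = torch.softmax(logits, dim=-1)
for sen, p in zip(sentences, probs):
    print(sen, idx2label[int(p.argmax())], float(p.max()))
```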
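
Alternatively, the `transformers` pipeline API wraps the same tokenize/forward/softmax steps in one call. Note (an assumption) that the hosted config may report the classes under the generic names `LABEL_0`/`LABEL_1` rather than `not-letter`/`letter`:

```python
from transformers import pipeline

classifier = pipeline('text-classification',
                      model='cbdb/ClassicalChineseLetterClassification',
                      tokenizer='bert-base-chinese')
print(classifier('上丞相康思公書'))
# e.g. [{'label': 'LABEL_1', 'score': 0.99...}]  where LABEL_1 = letter
```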