This model is the Candidate Set Generator (CSG) from "CDGP: Automatic Cloze Distractor Generation based on Pre-trained Language Model", Findings of EMNLP 2022.
Its inputs are a cloze stem and its answer, and its output is a candidate set of distractors. It was fine-tuned on the CLOTH dataset from the bert-base-uncased model.
For more details, see our paper or GitHub.
How to use?
- Download the model via Hugging Face Transformers.
```python
from transformers import BertTokenizer, BertForMaskedLM, pipeline

tokenizer = BertTokenizer.from_pretrained("AndyChiang/cdgp-csg-bert-cloth")
csg_model = BertForMaskedLM.from_pretrained("AndyChiang/cdgp-csg-bert-cloth")
```
- Create an unmasker.
```python
unmasker = pipeline("fill-mask", tokenizer=tokenizer, model=csg_model, top_k=10)
```
- Use the unmasker to generate the candidate set of distractors.
```python
sent = "I feel [MASK] now. [SEP] happy"
cs = unmasker(sent)
print(cs)
```
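Each item the fill-mask pipeline returns is a dict that includes `token_str` and `score` keys. A minimal sketch of post-processing the candidate set, using a mocked output so it runs without downloading the model (in practice `cs = unmasker(sent)`; the full CDGP pipeline applies a further Distractor Selector):

```python
# Mocked fill-mask output with the same shape as `unmasker(sent)`;
# the tokens and scores here are illustrative, not real model output.
cs = [
    {"token_str": "happy", "score": 0.30},
    {"token_str": "sad", "score": 0.25},
    {"token_str": "tired", "score": 0.20},
    {"token_str": "angry", "score": 0.15},
]

answer = "happy"

# Drop the correct answer from the candidates, keep the top 3 by score
# (the pipeline already returns candidates sorted by score).
distractors = [c["token_str"] for c in cs if c["token_str"] != answer][:3]
print(distractors)  # ['sad', 'tired', 'angry']
```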
This model was fine-tuned on the CLOTH dataset, a collection of nearly 100,000 cloze questions from middle school and high school English exams. Details of the CLOTH dataset are shown below.
| | Train | Valid | Test |
| --- | --- | --- | --- |
| Number of questions | | | |
You can also use the dataset we have already cleaned.
We fine-tune the model with a method called "Answer-Relating Fine-Tune". More detail is in our paper.
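Our reading of the input format: as at inference above, the masked stem is concatenated with the answer after `[SEP]`, and the token to predict at the `[MASK]` position is a gold distractor. A hedged sketch (the helper name and record layout are ours, not the authors' code; see the paper for the exact procedure):

```python
def build_example(stem: str, answer: str, distractor: str) -> dict:
    """Pair a masked stem + answer input with a distractor label.

    `stem` must contain a [MASK] token. The input mirrors the
    inference format "stem [SEP] answer"; this helper is purely
    illustrative.
    """
    text = f"{stem} [SEP] {answer}"
    return {"text": text, "label": distractor}

example = build_example("I feel [MASK] now.", "happy", "sad")
print(example["text"])  # I feel [MASK] now. [SEP] happy
```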
The following hyperparameters were used during training:
- Pre-train language model: bert-base-uncased
- Optimizer: Adam
- Learning rate: 0.0001
- Max length of input: 64
- Batch size: 64
- Epoch: 1
- Device: NVIDIA® Tesla T4 in Google Colab
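For reproducing the fine-tuning with Hugging Face `Trainer`, the listed hyperparameters map roughly onto `TrainingArguments` as follows (a config sketch under our assumptions; the output path is hypothetical and the authors' training script may differ):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="cdgp-csg-bert-cloth",  # assumed output path
    learning_rate=1e-4,                # Learning rate: 0.0001
    per_device_train_batch_size=64,    # Batch size: 64
    num_train_epochs=1,                # Epoch: 1
)
# The max input length of 64 is applied at tokenization time, e.g.:
# tokenizer(text, truncation=True, max_length=64)
```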
The evaluation of this model as a Candidate Set Generator in CDGP is as follows: