---
language: zh
widget:
- text: "小王在哪上学?"
  context: "小王在北京上学,他今年二十岁。"
---

# Chinese RoBERTa Base Model for QA

## Model description

The model is used for extractive question answering: given a question and a passage, it predicts the span of the passage that answers the question. You can download the model from the link [roberta-base-chinese-extractive-qa](https://huggingface.co/uer/roberta-base-chinese-extractive-qa).

## How to use

You can use the model directly with a pipeline for extractive question answering:

```python
>>> from transformers import pipeline
>>> path = 'uer/roberta-base-chinese-extractive-qa'
>>> nlp = pipeline('question-answering', model=path, tokenizer=path)
>>> QA_input = {'question': "小王在哪上学?", 'context': "小王在北京上学,他今年二十岁。"}
>>> nlp(QA_input)
{'score': 0.7618623375892639, 'start': 3, 'end': 5, 'answer': '北京'}
```

A sketch of the equivalent direct usage without the pipeline wrapper is given at the end of this card.

## Training data

The training data contains three datasets: [cmrc2018](https://github.com/ymcui/cmrc2018), [webqa](https://spaces.ac.cn/archives/4338), and [莱斯杯](https://www.kesci.com/home/competition/5d142d8cbb14e6002c04e14a/content/0).

## Training procedure

The model is fine-tuned by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud TI-ONE](https://cloud.tencent.com/product/tione/). We fine-tune for three epochs with a sequence length of 512, starting from the pre-trained model [chinese_roberta_L-12_H-768](https://huggingface.co/uer/chinese_roberta_L-12_H-768).

```
python3 run_cmrc.py --pretrained_model_path models/cluecorpussmall_roberta_base_seq512_model.bin-250000 \
                    --vocab_path models/google_zh_vocab.txt \
                    --train_path extractive_qa.json \
                    --dev_path datasets/cmrc2018/dev.json \
                    --output_model_path models/extractive_qa_model.bin \
                    --learning_rate 3e-5 --batch_size 32 --epochs_num 3 \
                    --seq_length 512 \
                    --embedding word_pos_seg --encoder transformer --mask fully_visible
```

Finally, we convert the fine-tuned model into Hugging Face's format:

```
python3 scripts/convert_roberta_extractive_qa_from_uer_to_huggingface.py --input_model_path models/extractive_qa_model.bin \
                                                                         --output_model_path pytorch_model.bin \
                                                                         --layers_num 12
```

### BibTeX entry and citation info

```
@article{zhao2019uer,
  title={UER: An Open-Source Toolkit for Pre-training Models},
  author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
  journal={EMNLP-IJCNLP 2019},
  pages={241},
  year={2019}
}
```
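
## Direct usage without the pipeline

For finer-grained control, the model can also be loaded with Transformers' Auto classes. The following is a minimal sketch: the argmax-based span decoding is a simplified illustration of extractive QA and omits the extra post-processing (such as invalid-span filtering and score normalization) that the `question-answering` pipeline performs.

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

path = 'uer/roberta-base-chinese-extractive-qa'
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForQuestionAnswering.from_pretrained(path)

question = "小王在哪上学?"
context = "小王在北京上学,他今年二十岁。"

# Encode the question and context as a single sequence pair.
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start and end token positions (simplified decoding;
# the pipeline additionally filters invalid spans and normalizes scores).
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0][start:end + 1])
print(answer)  # expected output: 北京
```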