letrunglinh committed on
Commit
42114fb
1 Parent(s): be492ae

Upload 6 files

README.md ADDED
@@ -0,0 +1,96 @@
+ ---
+ language:
+ - vi
+ - en
+ tags:
+ - question-answering
+ - pytorch
+ datasets:
+ - squad
+ license: cc-by-nc-4.0
+ pipeline_tag: question-answering
+ metrics:
+ - squad
+ widget:
+ - text: "Bình là chuyên gia về gì ?"
+   context: "Bình Nguyễn là một người đam mê với lĩnh vực xử lý ngôn ngữ tự nhiên . Anh nhận chứng chỉ Google Developer Expert năm 2020"
+ - text: "Bình được công nhận với danh hiệu gì ?"
+   context: "Bình Nguyễn là một người đam mê với lĩnh vực xử lý ngôn ngữ tự nhiên . Anh nhận chứng chỉ Google Developer Expert năm 2020"
+ ---
+ ## Model Description
+ 
+ - Language model: [XLM-RoBERTa](https://huggingface.co/transformers/model_doc/xlmroberta.html)
+ - Fine-tune: [MRCQuestionAnswering](https://github.com/nguyenvulebinh/extractive-qa-mrc)
+ - Language: Vietnamese, English
+ - Downstream task: Extractive QA
+ - Dataset (combining English and Vietnamese):
+   - [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/)
+   - [mailong25](https://github.com/mailong25/bert-vietnamese-question-answering/tree/master/dataset)
+   - [UIT-ViQuAD](https://www.aclweb.org/anthology/2020.coling-main.233/)
+   - [MultiLingual Question Answering](https://github.com/facebookresearch/MLQA)
+ 
+ This model is intended for QA in Vietnamese, so the validation set is Vietnamese only (though English works fine). The evaluation results below were computed on 10% of the Vietnamese dataset.
+ 
+ | Model | EM | F1 |
+ | ------------- | ------------- | ------------- |
+ | [base](https://huggingface.co/nguyenvulebinh/vi-mrc-base) | 76.43 | 84.16 |
+ | [large](https://huggingface.co/nguyenvulebinh/vi-mrc-large) | 77.32 | 85.46 |
+ 
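+ EM counts predictions that match a gold answer exactly (after normalization), while F1 gives partial credit for token overlap. For illustration only (this is not the author's evaluation script, and the example id and `answer_start` are made up), SQuAD-style EM/F1 can be computed with the `evaluate` library:
+ 
+ ```python
+ import evaluate  # pip install evaluate
+ 
+ # Hypothetical one-example check: the id and answer_start are made up.
+ squad = evaluate.load("squad")
+ predictions = [{"id": "0", "prediction_text": "Google Developer Expert"}]
+ references = [{"id": "0", "answers": {"text": ["Google Developer Expert"],
+                                       "answer_start": [91]}}]
+ print(squad.compute(predictions=predictions, references=references))
+ # {'exact_match': 100.0, 'f1': 100.0}
+ ```
+ 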
+ 
+ [MRCQuestionAnswering](https://github.com/nguyenvulebinh/extractive-qa-mrc) uses [XLM-RoBERTa](https://huggingface.co/transformers/model_doc/xlmroberta.html) as its pre-trained language model. By default, XLM-RoBERTa splits words into sub-words. In my implementation, I re-combine the sub-word representations (after they are encoded by the BERT layer) into word representations using a sum strategy, as sketched below.
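+ 
+ A minimal sketch of that sum strategy (illustrative only; `sum_pool_subwords` and the example tokenization are not from the repository):
+ 
+ ```python
+ import torch
+ 
+ def sum_pool_subwords(hidden_states: torch.Tensor, word_ids: list) -> torch.Tensor:
+     """Sum sub-word vectors that belong to the same word.
+ 
+     hidden_states: (seq_len, hidden_size) encoder output for one sequence.
+     word_ids: per-position word index, None for special tokens
+               (same convention as the tokenizers library's word_ids()).
+     """
+     num_words = max(w for w in word_ids if w is not None) + 1
+     word_reprs = hidden_states.new_zeros(num_words, hidden_states.size(-1))
+     for pos, w in enumerate(word_ids):
+         if w is not None:
+             word_reprs[w] += hidden_states[pos]  # sum strategy
+     return word_reprs
+ 
+ # e.g. if a two-word phrase were split into 3 sub-words with word_ids
+ # [0, 1, 1], the 3 sub-word vectors collapse into 2 word vectors:
+ hidden = torch.randn(3, 768)
+ print(sum_pool_subwords(hidden, [0, 1, 1]).shape)  # torch.Size([2, 768])
+ ```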
+ 
+ ## Using pre-trained model
+ 
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1Yqgdfaca7L94OyQVnq5iQq8wRTFvVZjv?usp=sharing)
+ 
+ - Hugging Face pipeline style (**NOT using sum features strategy**).
+ 
+ ```python
+ from transformers import pipeline
+ 
+ # model_checkpoint = "nguyenvulebinh/vi-mrc-large"
+ model_checkpoint = "nguyenvulebinh/vi-mrc-base"
+ nlp = pipeline('question-answering', model=model_checkpoint,
+                tokenizer=model_checkpoint)
+ QA_input = {
+     'question': "Bình là chuyên gia về gì ?",
+     'context': "Bình Nguyễn là một người đam mê với lĩnh vực xử lý ngôn ngữ tự nhiên . Anh nhận chứng chỉ Google Developer Expert năm 2020"
+ }
+ res = nlp(QA_input)
+ print('pipeline: {}'.format(res))
+ # {'score': 0.5782045125961304, 'start': 45, 'end': 68, 'answer': 'xử lý ngôn ngữ tự nhiên'}
+ ```
+ 
+ - More accurate inference process ([**using the sum features strategy**](https://github.com/nguyenvulebinh/extractive-qa-mrc))
+ 
+ ```python
+ # `infer` and `model.mrc_model` come from the repository:
+ # https://github.com/nguyenvulebinh/extractive-qa-mrc
+ from infer import tokenize_function, data_collator, extract_answer
+ from model.mrc_model import MRCQuestionAnswering
+ from transformers import AutoTokenizer
+ 
+ # model_checkpoint = "nguyenvulebinh/vi-mrc-large"
+ model_checkpoint = "nguyenvulebinh/vi-mrc-base"
+ tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
+ model = MRCQuestionAnswering.from_pretrained(model_checkpoint)
+ 
+ QA_input = {
+     'question': "Bình được công nhận với danh hiệu gì ?",
+     'context': "Bình Nguyễn là một người đam mê với lĩnh vực xử lý ngôn ngữ tự nhiên . Anh nhận chứng chỉ Google Developer Expert năm 2020"
+ }
+ 
+ # ** passes question/context by name; * would unpack only the dict's keys
+ inputs = [tokenize_function(**QA_input)]
+ inputs_ids = data_collator(inputs)
+ outputs = model(**inputs_ids)
+ answer = extract_answer(inputs, outputs, tokenizer)
+ 
+ print(answer)
+ # answer: Google Developer Expert. Score start: 0.9926977753639221, Score end: 0.9909810423851013
+ ```
+ 
+ ## About
+ 
+ *Built by Binh Nguyen*
+ [![Follow](https://img.shields.io/twitter/follow/nguyenvulebinh?style=social)](https://twitter.com/intent/follow?screen_name=nguyenvulebinh)
+ For more details, visit the project repository.
+ [![GitHub stars](https://img.shields.io/github/stars/nguyenvulebinh/extractive-qa-mrc?style=social)](https://github.com/nguyenvulebinh/extractive-qa-mrc)
config.json ADDED
@@ -0,0 +1,27 @@
+ {
+   "_name_or_path": "xlm-roberta-base",
+   "architectures": [
+     "XLMRobertaForQuestionAnswering"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "eos_token_id": 2,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.2,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "roberta",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "output_past": true,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "transformers_version": "4.8.2",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 250002
+ }
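The config keeps the `xlm-roberta-base` dimensions (12 layers, hidden size 768, vocab 250002) with `hidden_dropout_prob` raised to 0.2. As a quick sanity check using the standard `transformers` API (not part of this commit):

```python
from transformers import AutoConfig

# Load just the config (no weights) and inspect the key fields above.
config = AutoConfig.from_pretrained("nguyenvulebinh/vi-mrc-base")
print(config.model_type, config.hidden_size, config.vocab_size)
# roberta 768 250002
```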
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bdbc882f499b80c61acc5ddc84e91dfaec12d13847b211431e76fb36a67011d6
+ size 1112263149
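The three lines above are a Git LFS pointer: the ~1.1 GB weight file itself lives in LFS storage, addressed by the sha256 oid. A minimal sketch of fetching the real file with the standard `huggingface_hub` API (not part of this commit):

```python
from huggingface_hub import hf_hub_download

# Resolves the LFS pointer and downloads the actual weights into the local cache.
path = hf_hub_download(repo_id="nguyenvulebinh/vi-mrc-base",
                       filename="pytorch_model.bin")
print(path)
```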
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": false}}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<s>", "eos_token": "</s>", "sep_token": "</s>", "cls_token": "<s>", "unk_token": "<unk>", "pad_token": "<pad>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "xlm-roberta-base", "tokenizer_class": "XLMRobertaTokenizer"}