---
language: en
tags:
- pytorch
- question-answering
datasets:
- squad2
metrics:
- exact
- f1
widget:
- text: "What discipline did Winckelmann create?"
  context: "Johann Joachim Winckelmann was a German art historian and archaeologist. He was a pioneering Hellenist who first articulated the difference between Greek, Greco-Roman and Roman art. The prophet and founding hero of modern archaeology, Winckelmann was one of the founders of scientific archaeology and first applied the categories of style on a large, systematic basis to the history of art."
---

# bert-base-finetuned-squad2

## Model description

This model is based on [bert-base](https://huggingface.co/bert-base) and was finetuned on [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/). The corresponding papers can be found [here (model)](https://arxiv.org/abs/1810.04805) and [here (data)](https://arxiv.org/abs/1806.03822).

## How to use

```python
from transformers import pipeline

model_name = "phiyodr/bert-base-finetuned-squad2"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
inputs = {
    'question': 'What discipline did Winckelmann create?',
    'context': 'Johann Joachim Winckelmann was a German art historian and archaeologist. He was a pioneering Hellenist who first articulated the difference between Greek, Greco-Roman and Roman art. "The prophet and founding hero of modern archaeology", Winckelmann was one of the founders of scientific archaeology and first applied the categories of style on a large, systematic basis to the history of art.'
}
nlp(inputs)
```

## Training procedure

```python
{
    "base_model": "bert-base",
    "do_lower_case": True,
    "learning_rate": 3e-5,
    "num_train_epochs": 4,
    "max_seq_length": 384,
    "doc_stride": 128,
    "max_query_length": 64,
    "batch_size": 96
}
```
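The `max_seq_length` and `doc_stride` settings control how contexts longer than the model's input window are split into overlapping chunks. A minimal sketch of that sliding-window logic (illustrative only; the actual chunking happens inside the tokenizer/pipeline, and `window_starts` is a hypothetical helper, not part of this model or library):

```python
def window_starts(n_tokens, max_len=384, stride=128):
    """Start offsets of overlapping windows over a tokenized context.

    Each window covers up to max_len tokens; consecutive windows
    advance by stride tokens, so adjacent windows overlap by
    max_len - stride tokens.
    """
    starts, s = [], 0
    while True:
        starts.append(s)
        if s + max_len >= n_tokens:  # last window reaches the end
            break
        s += stride
    return starts

print(window_starts(600))  # [0, 128, 256]: three overlapping windows
```

With the training settings above, a 600-token context would be scored in three overlapping windows, and the best answer span across windows is kept.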

## Eval results

- Data: [dev-v2.0.json](https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json)
- Script: [evaluate-v2.0.py](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/) (original script from [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/README.md))

```json
{
  "exact": 70.3950138970774,
  "f1": 73.90527661873521,
  "total": 11873,
  "HasAns_exact": 71.4574898785425,
  "HasAns_f1": 78.48808186475087,
  "HasAns_total": 5928,
  "NoAns_exact": 69.33557611438184,
  "NoAns_f1": 69.33557611438184,
  "NoAns_total": 5945
}
```
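As a sanity check, the overall `exact` score is the example-weighted average of the `HasAns` and `NoAns` subset scores (a quick sketch using the numbers above, not part of the original card):

```python
# Values copied from the eval results above
has_ans_exact, has_ans_total = 71.4574898785425, 5928
no_ans_exact, no_ans_total = 69.33557611438184, 5945

total = has_ans_total + no_ans_total  # 11873 dev examples
overall_exact = (has_ans_exact * has_ans_total
                 + no_ans_exact * no_ans_total) / total
print(round(overall_exact, 4))  # 70.395, matching "exact" above
```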