PremalMatalia committed
Commit 2ed088a
1 Parent(s): 784c84c

Create README.md

Files changed (1):
  README.md +80 -0
README.md ADDED
@@ -0,0 +1,80 @@
---
datasets:
- squad_v2
---

# ELECTRA-base for QA

## Overview
**Language model:** deepset/electra-base-squad2 <br/>
**Language:** English <br/>
**Downstream task:** Extractive QA <br/>
**Training data:** SQuAD 2.0 <br/>
**Eval data:** SQuAD 2.0 <br/>
**Code:** <TBD> <br/>

## Env Information
`transformers` version: 4.9.1 <br/>
Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic <br/>
Python version: 3.7.11 <br/>
PyTorch version (GPU?): 1.9.0+cu102 (False) <br/>
TensorFlow version (GPU?): 2.5.0 (False) <br/>
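
To approximate this environment, a minimal install sketch (exact CUDA builds and platform wheels may differ on your machine):

```
pip install transformers==4.9.1 torch==1.9.0 tensorflow==2.5.0
```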

## Hyperparameters
```
max_seq_len=386
doc_stride=128
n_best_size=20
max_answer_length=30
min_null_score=7.0
batch_size=8

n_epochs=2
base_LM_model="deepset/electra-base-squad2"
learning_rate=1.5e-5
adam_epsilon=1e-5
adam_beta1=0.95
adam_beta2=0.999
warmup_steps=100
weight_decay=0.01
optimizer=AdamW
lr_scheduler="polynomial"
```
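
For reference, a minimal sketch of how the hyperparameters above could be wired into Hugging Face's `TrainingArguments`; the output directory is a placeholder, and this is not the author's actual training script:

```python
from transformers import TrainingArguments

# Sketch only: maps the listed hyperparameters onto TrainingArguments.
# AdamW is the Trainer's default optimizer, so it needs no explicit flag.
training_args = TrainingArguments(
    output_dir="./electra-base-squad2-finetuned",  # hypothetical path
    num_train_epochs=2,
    per_device_train_batch_size=8,
    learning_rate=1.5e-5,
    adam_epsilon=1e-5,
    adam_beta1=0.95,
    adam_beta2=0.999,
    warmup_steps=100,
    weight_decay=0.01,
    lr_scheduler_type="polynomial",
)
```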

##### A special threshold value `CLS_threshold=-3` is used to identify no-answer questions more accurately. [Logic will be available in GitHub repo: TBD]
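
As an illustration only (the repo's actual logic is still TBD), such a threshold is typically applied during SQuAD 2.0 post-processing by comparing the null-answer score taken at the `[CLS]` position against the best non-null span score:

```python
CLS_threshold = -3.0  # threshold value from the note above

def resolve_answer(best_span_text, best_span_score, null_score):
    """Generic SQuAD 2.0 no-answer decision sketch (not the author's code).

    null_score: sum of start/end logits at the [CLS] token.
    best_span_score: sum of start/end logits of the best candidate span.
    """
    # Predict "no answer" when the null score beats the best span
    # score by more than the threshold.
    if null_score - best_span_score > CLS_threshold:
        return ""  # SQuAD 2.0 convention: empty answer = unanswerable
    return best_span_text
```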

## Performance
```
"exact": 79.331256
"f1": 83.232347
"total": 11873
"HasAns_exact": 76.501350
"HasAns_f1": 84.314719
"HasAns_total": 5928
"NoAns_exact": 82.153070
"NoAns_f1": 82.153070
"NoAns_total": 5945
```
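
These keys follow the schema of the official SQuAD 2.0 evaluation script. One way to compute the same metrics, sketched with the `squad_v2` metric from the `datasets` library (the prediction/reference pair below is illustrative only):

```python
from datasets import load_metric

squad_v2_metric = load_metric("squad_v2")

# Illustrative single example; a real evaluation runs over the full
# SQuAD 2.0 dev set (11,873 questions, as reported above).
predictions = [{"id": "q1", "prediction_text": "Amazonia",
                "no_answer_probability": 0.0}]
references = [{"id": "q1",
               "answers": {"text": ["Amazonia"], "answer_start": [201]}}]
print(squad_v2_metric.compute(predictions=predictions, references=references))
```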

## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "PremalMatalia/electra-base-best-squad2"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Which name is also used to describe the Amazon rainforest in English?',
    'context': 'The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain "Amazonas" in their names. The Amazon represents over half of the planet\'s remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species.'
}
res = nlp(QA_input)
print(res)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
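
Running the pipeline prints a dict with `score`, `start`, `end`, and `answer` keys; for this example the model should return "Amazonia" (or a close span) as the answer.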

## Authors
Premal Matalia