haritzpuerto committed
Commit
9444c2b
1 Parent(s): 125d0a5

Update README.md

Files changed (1)
  1. README.md +7 -4
README.md CHANGED
@@ -10,6 +10,7 @@ library_name: adapter-transformers
 pipeline_tag: question-answering
 ---
 
+# Description
 This is the MADE encoder model created by Friedman et al. (2021). This encoder should be used along with the following dataset-specific adapters.
 - https://huggingface.co/UKP-SQuARE/MADE_HotpotQA_Adapter
 - https://huggingface.co/UKP-SQuARE/MADE_TriviaQA_Adapter
@@ -18,6 +19,11 @@ This is the MADE encoder model created by Friedman et al. (2021). This encoder s
 - https://huggingface.co/UKP-SQuARE/MADE_NewsQA_Adapter
 - https://huggingface.co/UKP-SQuARE/MADE_NaturalQuestions_Adapter
 
+The UKP-SQuARE team created this model repository to simplify the deployment of this model on the UKP-SQuARE platform. The GitHub repository of the original authors is https://github.com/princeton-nlp/MADE
+
+
+
+# Evaluation Results
 Friedman et al. (2021) reported the following results:
 
 - SQuAD v1.1: 92.4
@@ -29,10 +35,7 @@ Friedman et al. (2021) reported the following results:
 - Avg: 82.2
 
 
-The UKP-SQuARE team created this model repository to simplify the deployment of this model on the UKP-SQuARE platform. The GitHub repository of the original authors is https://github.com/princeton-nlp/MADE
-
-
 Please refer to the original publication for more information.
 
-Citation:
+# Citation
 Single-dataset Experts for Multi-dataset Question Answering (Friedman et al., EMNLP 2021)
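
The updated README tells users to pair this encoder with one of the dataset-specific adapters, so a usage sketch may help. The snippet below is a minimal sketch, not code from the model card: it assumes the encoder is RoBERTa-based, uses a hypothetical repo id `UKP-SQuARE/MADE_Encoder` for this model, and relies on the `adapter-transformers` package (which provides `transformers.adapters`) named in the README metadata.

```python
# Minimal sketch, assuming a RoBERTa-based encoder and the adapter-transformers
# package (which ships the transformers.adapters module). The encoder repo id
# below is an assumption for illustration, not confirmed by the model card.
import torch
from transformers import RobertaTokenizer
from transformers.adapters import RobertaAdapterModel

encoder_id = "UKP-SQuARE/MADE_Encoder"  # hypothetical repo id for this encoder

tokenizer = RobertaTokenizer.from_pretrained(encoder_id)
model = RobertaAdapterModel.from_pretrained(encoder_id)

# Load one of the dataset-specific adapters listed in the README and activate it.
adapter_name = model.load_adapter("UKP-SQuARE/MADE_TriviaQA_Adapter", source="hf")
model.active_adapters = adapter_name

# Extractive QA, assuming the adapter comes with a span-prediction head.
question = "Who created MADE?"
context = "MADE was created by Friedman et al. and published at EMNLP 2021."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Decode the highest-scoring answer span from the start/end logits.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```

Swapping the adapter repo id for any of the other adapters listed above (HotpotQA, NewsQA, NaturalQuestions, etc.) should follow the same pattern, since MADE shares a single encoder across all dataset-specific adapters.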