techysanoj committed on
Commit
b61451f
1 Parent(s): 3c136f8

create read me file

Files changed (1)
  1. README.md +120 -0
README.md ADDED
---
license: cc-by-4.0
datasets:
- squad
- squad_v2
language:
- en
- hi
metrics:
- accuracy
pipeline_tag: question-answering
---

# roberta-fine-tuned-squadv2 for QA

This is the [roberta-fine-tuned-squadv2-large](https://huggingface.co/techysanoj/roberta-fine-tuned-squadv2-large) model: roberta-base fine-tuned on the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It was trained on question-answer pairs, including unanswerable questions, for the task of extractive Question Answering.

## Overview
**Language model:** roberta-fine-tuned-squadv2  
**Language:** English, Hindi (upcoming)  
**Downstream-task:** Extractive QA  
**Training data:** SQuAD 2.0  
**Eval data:** SQuAD 2.0  
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)  
**Infrastructure:** 4x Tesla V100

## Hyperparameters

```
batch_size = 4
n_epochs = 50
base_LM_model = "roberta-base"
max_seq_len = 512
learning_rate = 9e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride = 128
max_query_length = 64
```
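
As an illustration only (not the authors' original training script), the settings above could map onto a `transformers` training run roughly as follows. The SQuAD 2.0 preprocessing (span alignment with `doc_stride` and `max_query_length`) is assumed to follow the standard `transformers` question-answering example; `tokenized_squad_v2` below is a hypothetical name for that preprocessed dataset.

```python
# Illustrative sketch only: maps the hyperparameters above onto TrainingArguments.
# Assumes SQuAD 2.0 has been tokenized into `tokenized_squad_v2` (hypothetical name)
# following the standard transformers question-answering example.
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    TrainingArguments,
    Trainer,
)

base_LM_model = "roberta-base"
tokenizer = AutoTokenizer.from_pretrained(base_LM_model)
model = AutoModelForQuestionAnswering.from_pretrained(base_LM_model)

training_args = TrainingArguments(
    output_dir="roberta-fine-tuned-squadv2-large",
    per_device_train_batch_size=4,   # batch_size = 4
    num_train_epochs=50,             # n_epochs = 50
    learning_rate=9e-5,              # learning_rate = 9e-5
    lr_scheduler_type="linear",      # lr_schedule = LinearWarmup
    warmup_ratio=0.2,                # warmup_proportion = 0.2
)

# Tokenization of (question, context) pairs would use max_length=512 (max_seq_len),
# stride=128 (doc_stride), and questions truncated to 64 tokens (max_query_length).
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=tokenized_squad_v2["train"])
# trainer.train()
```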

## Usage

### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
from haystack.nodes import FARMReader, TransformersReader  # Haystack 1.x

reader = FARMReader(model_name_or_path="techysanoj/roberta-fine-tuned-squadv2-large")
# or
reader = TransformersReader(model_name_or_path="techysanoj/roberta-fine-tuned-squadv2-large", tokenizer="techysanoj/roberta-fine-tuned-squadv2-large")
```
For a complete example of this model being used for Question Answering, check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system).
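
As a rough end-to-end sketch (assuming Haystack 1.x with an in-memory BM25 document store; the documents, query, and `top_k` values below are only illustrative), the reader can be combined with a retriever in an `ExtractiveQAPipeline`:

```python
# Hedged sketch of a full QA pipeline in Haystack 1.x; documents and top_k
# values are illustrative and not part of this model card.
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import BM25Retriever, FARMReader
from haystack.pipelines import ExtractiveQAPipeline

document_store = InMemoryDocumentStore(use_bm25=True)
document_store.write_documents([
    {"content": "RoBERTa is a robustly optimized BERT pretraining approach."},
])

retriever = BM25Retriever(document_store=document_store)
reader = FARMReader(model_name_or_path="techysanoj/roberta-fine-tuned-squadv2-large")

pipe = ExtractiveQAPipeline(reader, retriever)
prediction = pipe.run(
    query="What is RoBERTa?",
    params={"Retriever": {"top_k": 10}, "Reader": {"top_k": 3}},
)
print(prediction["answers"][0].answer)
```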

### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "techysanoj/roberta-fine-tuned-squadv2-large"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
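
Continuing from the snippet above, here is a minimal sketch of manual inference with the loaded `model` and `tokenizer` (the `pipeline` already does this, and additionally handles no-answer thresholding for SQuAD 2.0-style unanswerable questions):

```python
# Sketch of manual inference; assumes `model` and `tokenizer` were loaded as above.
import torch

question = "Why is model conversion important?"
context = (
    "The option to convert models between FARM and transformers gives freedom "
    "to the user and let people easily switch between frameworks."
)

inputs = tokenizer(question, context, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start/end token positions and decode that span as the answer.
start_idx = int(outputs.start_logits.argmax())
end_idx = int(outputs.end_logits.argmax())
answer_ids = inputs["input_ids"][0][start_idx : end_idx + 1]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))
```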

## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).

```
"exact": 79.87029394424324,
"f1": 82.91251169582613,

"total": 11873,
"HasAns_exact": 77.93522267206478,
"HasAns_f1": 84.02838248389763,
"HasAns_total": 5928,
"NoAns_exact": 81.79983179142137,
"NoAns_f1": 81.79983179142137,
"NoAns_total": 5945
```
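
These numbers could in principle also be recomputed with the `evaluate` library's `squad_v2` metric instead of the CodaLab script. A hedged sketch follows; the `no_answer_probability` heuristic is illustrative and not the official scoring path, so results may differ slightly.

```python
# Sketch only: recompute EM/F1 on the SQuAD 2.0 dev set with evaluate's "squad_v2"
# metric. The no_answer_probability heuristic below is a rough stand-in.
from datasets import load_dataset
from transformers import pipeline
import evaluate

nlp = pipeline(
    "question-answering",
    model="techysanoj/roberta-fine-tuned-squadv2-large",
    tokenizer="techysanoj/roberta-fine-tuned-squadv2-large",
)
dev = load_dataset("squad_v2", split="validation")
metric = evaluate.load("squad_v2")

predictions, references = [], []
for ex in dev:
    out = nlp(question=ex["question"], context=ex["context"], handle_impossible_answer=True)
    predictions.append({
        "id": ex["id"],
        "prediction_text": out["answer"],
        "no_answer_probability": 1.0 - out["score"],
    })
    references.append({"id": ex["id"], "answers": ex["answers"]})

results = metric.compute(predictions=predictions, references=references)
print(results["exact"], results["f1"])
```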

## Authors
**Shashwat Bindal:** optimus.coders.@ai  
**Sanoj:** optimus.coders.@ai

## About us

<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
    <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
         <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
    </div>
    <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
         <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
    </div>
</div>

[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/), which is designed to help you build production-ready NLP systems for question answering, summarization, ranking, and more.

Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)

## Get in touch and join the Haystack community

<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.

We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>

[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)

By the way: [we're hiring!](http://www.deepset.ai/jobs)