---
language: en
license: cc-by-4.0
tags:
- flan
- flan-t5
datasets:
- squad_v2
- squad
---
# flan-t5-xl for Extractive QA

This is the [flan-t5-xl](https://huggingface.co/google/flan-t5-xl) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Extractive Question Answering.

## Overview
**Language model:** flan-t5-xl
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)

## Hyperparameters

```
n_epochs = 4
```

## Usage

### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do extractive question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
# NOTE: This only works with Haystack v2.0!
from haystack.components.readers import ExtractiveReader

reader = ExtractiveReader(model_name_or_path="deepset/flan-t5-xl-squad2")
```
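As a minimal sketch of what comes next (assuming the Haystack v2.0 `Document` dataclass and the `ExtractiveReader.warm_up()`/`run()` API; the document texts and query are made-up examples), the loaded reader can be run directly over a few in-memory documents:

```python
# Minimal sketch, assuming Haystack v2.0 (the haystack-ai package) is installed.
from haystack import Document
from haystack.components.readers import ExtractiveReader

reader = ExtractiveReader(model_name_or_path="deepset/flan-t5-xl-squad2")
reader.warm_up()  # loads the underlying model weights

docs = [
    Document(content="Haystack is an open-source NLP framework built by deepset."),
    Document(content="flan-t5-xl was fine-tuned on SQuAD 2.0 for extractive question answering."),
]
result = reader.run(query="Who built Haystack?", documents=docs, top_k=2)
print(result["answers"])  # extracted answers with their text, score, and source document
```

In a full pipeline, the reader is typically placed after a retriever, so it only reads the documents returned for each query.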
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "deepset/flan-t5-xl-squad2"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
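Because the model was trained on SQuAD 2.0, which includes unanswerable questions, it can also decline to answer. A small sketch (the question/context pair is just an illustrative example) using the question-answering pipeline's `handle_impossible_answer` option, which allows an empty answer to be returned when the context does not contain one:

```python
from transformers import pipeline

model_name = "deepset/flan-t5-xl-squad2"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)

# handle_impossible_answer=True lets the pipeline return an empty answer
# when the context does not actually answer the question (SQuAD 2.0-style).
res = nlp(
    question='Who won the 2022 World Cup?',
    context='The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.',
    handle_impossible_answer=True,
)
print(res)  # an empty "answer" string signals that no answer was found in the context
```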
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
  <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
    <img alt="" src="https://huggingface.co/spaces/deepset/README/resolve/main/haystack-logo-colored.svg" class="w-40"/>
  </div>
  <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
    <img alt="" src="https://huggingface.co/spaces/deepset/README/resolve/main/deepset-logo-colored.svg" class="w-40"/>
  </div>
</div>

[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/), which is designed to help you build production-ready NLP systems that use question answering, summarization, ranking, and more.

## Get in touch and join the Haystack community

<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://haystack.deepset.ai">Documentation</a></strong>.

We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join">Discord community open to everyone!</a></strong></p>

[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)