abhijithneilabraham committed
Commit cf362a9
1 Parent(s): 3054086

Update README.md

Files changed (1): README.md (+25 -1)
README.md CHANGED
@@ -1,2 +1,26 @@
  Covid 19 question answering data obtained from [covid_qa_deepset](https://huggingface.co/datasets/covid_qa_deepset).
- Repository for the fine tuning, inference and evaluation scripts can be found [here](https://github.com/abhijithneilabraham/Covid-QA)
+ Repository for the fine tuning, inference and evaluation scripts can be found [here](https://github.com/abhijithneilabraham/Covid-QA).
+
+ ```
+ import torch
+ from transformers import AutoTokenizer, AutoModelForQuestionAnswering
+
+ tokenizer = AutoTokenizer.from_pretrained("abhijithneilabraham/longformer_covid_qa")
+ model = AutoModelForQuestionAnswering.from_pretrained("abhijithneilabraham/longformer_covid_qa")
+
+ text = "Huggingface has democratized NLP. Huge thanks to Huggingface for this."
+ question = "What has Huggingface done ?"
+ encoding = tokenizer(question, text, return_tensors="pt")
+ input_ids = encoding["input_ids"]
+
+ # default is local attention everywhere
+ # the forward method will automatically set global attention on question tokens
+ attention_mask = encoding["attention_mask"]
+
+ start_scores, end_scores = model(input_ids, attention_mask=attention_mask)
+ all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
+
+ answer_tokens = all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores) + 1]
+ answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))
+ # output => democratized NLP
+ ```
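
Note that the snippet added in this commit follows the tuple-style return of older `transformers` releases; on transformers v4 and later the forward pass returns an output object with `start_logits` and `end_logits` rather than a `(start_scores, end_scores)` tuple, so the unpacking line would fail there. A minimal sketch of the same inference against the newer API (an assumption of transformers >= 4.0; the model and tokenizer names are taken from the snippet above):

```
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Assumption: transformers >= 4.0, where model(...) returns a
# QuestionAnsweringModelOutput instead of a (start_scores, end_scores) tuple.
tokenizer = AutoTokenizer.from_pretrained("abhijithneilabraham/longformer_covid_qa")
model = AutoModelForQuestionAnswering.from_pretrained("abhijithneilabraham/longformer_covid_qa")

text = "Huggingface has democratized NLP. Huge thanks to Huggingface for this."
question = "What has Huggingface done ?"
encoding = tokenizer(question, text, return_tensors="pt")

# Inference only, so no gradient graph is needed.
with torch.no_grad():
    outputs = model(**encoding)

# Pick the most likely start/end positions and decode the span between them.
start_index = torch.argmax(outputs.start_logits)
end_index = torch.argmax(outputs.end_logits)
answer_ids = encoding["input_ids"][0][start_index : end_index + 1]
answer = tokenizer.decode(answer_ids)
# expected output => democratized NLP
```

Decoding the input ids directly (instead of converting to tokens and back) keeps the example shorter without changing the result.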