## ALBERT Transformer on SQuAD v2
Training was done on the [SQuAD_v2](https://rajpurkar.github.io/SQuAD-explorer/) dataset. The model can be loaded from the Hugging Face Hub as `abhilash1910/albert-squad-v2`.
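As a minimal sketch of fetching the training data with the `datasets` library (the `squad_v2` Hub identifier is an assumption; this README does not state how the data was obtained):

```python
from datasets import load_dataset

# Assumption: the standard SQuAD v2 dataset hosted on the Hugging Face Hub.
squad = load_dataset('squad_v2')

# Each example pairs a question with a context passage.
print(squad['train'][0]['question'])
```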
## Model Specifications
We used the following training parameters (a sketch of how they plug into `TrainingArguments` follows the list):
- `num_train_epochs=0.25`
- `per_device_train_batch_size=5`
- `per_device_eval_batch_size=10`
- `warmup_steps=100`
- `weight_decay=0.01`
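For reference, a minimal sketch of these parameters expressed as Hugging Face `TrainingArguments`; the `output_dir` is a hypothetical placeholder, since the README does not name one:

```python
from transformers import TrainingArguments

# The parameters listed above, passed as TrainingArguments.
# output_dir is a hypothetical path, not taken from this repository.
training_args = TrainingArguments(
    output_dir='./albert-squad-v2',
    num_train_epochs=0.25,
    per_device_train_batch_size=5,
    per_device_eval_batch_size=10,
    warmup_steps=100,
    weight_decay=0.01,
)
print(training_args.warmup_steps)  # 100
```

These arguments would then be handed to a `Trainer` together with the tokenized SQuAD v2 train and eval splits.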
## Usage Specifications
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

# Load the fine-tuned model and tokenizer from the Hugging Face Hub.
model = AutoModelForQuestionAnswering.from_pretrained('abhilash1910/albert-squad-v2')
tokenizer = AutoTokenizer.from_pretrained('abhilash1910/albert-squad-v2')

# Wrap them in a question-answering pipeline.
nlp_QA = pipeline('question-answering', model=model, tokenizer=tokenizer)

QA_inp = {
    'question': 'How many parameters does Bert large have?',
    'context': 'Bert large is really big... it has 24 layers, for a total of 340M parameters.Altogether it is 1.34 GB so expect it to take a couple minutes to download to your Colab instance.'
}
result = nlp_QA(QA_inp)
print(result)
```
## Result
The result is:

`{'answer': '340M', 'end': 65, 'score': 0.14847151935100555, 'start': 61}`