---
license: apache-2.0
pipeline_tag: question-answering
tags:
- question-answering
- transformers
- generated_from_trainer
datasets:
- squad_v2
- LLukas22/nq-simplified
language:
- en
---

# all-MiniLM-L12-v2-qa-en
This is an extractive question-answering (QA) model: given a question and a context passage, it predicts the span of the context that answers the question.
It is a fine-tuned version of [all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2), trained on the following datasets: [squad_v2](https://huggingface.co/datasets/squad_v2) and [LLukas22/nq-simplified](https://huggingface.co/datasets/LLukas22/nq-simplified).



## Usage

You can use the model like this:

```python
from transformers import pipeline

# Load the model and tokenizer into a question-answering pipeline
model_name = "LLukas22/all-MiniLM-L12-v2-qa-en"
nlp = pipeline("question-answering", model=model_name, tokenizer=model_name)

QA_input = {
    "question": "What's my name?",
    "context": "My name is Clara and I live in Berkeley."
}

# result is a dict like {'score': float, 'start': int, 'end': int, 'answer': str}
result = nlp(QA_input)
print(result)
```
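Since [squad_v2](https://huggingface.co/datasets/squad_v2) contains unanswerable questions, you can also let the pipeline return an empty answer when the context does not contain one. `handle_impossible_answer` is a standard option of the question-answering pipeline, not something specific to this model:

```python
# Allow "no answer" as a candidate prediction
result = nlp(QA_input, handle_impossible_answer=True)
```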
Alternatively, you can load the model and tokenizer separately:
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Load the model and tokenizer directly
model_name = "LLukas22/all-MiniLM-L12-v2-qa-en"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
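If you prefer to run inference without the pipeline, the following is a minimal sketch of the standard extractive-QA decoding pattern (it ignores edge cases the pipeline handles for you, such as an end position before the start position or contexts longer than the model's input size):

```python
import torch

question = "What's my name?"
context = "My name is Clara and I live in Berkeley."

# Tokenize the question/context pair and run a forward pass
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start and end token positions and decode that span
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits)
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)  # should print the answer span, e.g. "Clara"
```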

## Training hyperparameters
The following hyperparameters were used during training:

- learning_rate: 2E-05
- per device batch size: 60
- effective batch size: 180
- seed: 42
- optimizer: AdamW with betas (0.9, 0.999) and eps 1E-08
- weight decay: 1E-02
- D-Adaptation: False
- Warmup: False
- number of epochs: 10
- mixed_precision_training: bf16
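
For reference, here is a minimal sketch of how these settings map onto a PyTorch optimizer. The actual training loop lives in the repository linked below; the gradient-accumulation factor is inferred from the per-device and effective batch sizes:

```python
import torch
from transformers import AutoModelForQuestionAnswering

# Base checkpoint that was fine-tuned (a fresh QA head is added on top)
model = AutoModelForQuestionAnswering.from_pretrained(
    "sentence-transformers/all-MiniLM-L12-v2"
)

# AdamW configured exactly as listed above
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=2e-5,
    betas=(0.9, 0.999),
    eps=1e-8,
    weight_decay=1e-2,
)

# An effective batch size of 180 with a per-device batch size of 60
# implies accumulating gradients over 3 steps (or 3 data-parallel devices).
accumulation_steps = 180 // 60
```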

## Training results
| Epoch | Train Loss | Validation Loss |
| ----- | ---------- | --------------- |
| 0 | 2.65 | 1.88 |
| 1 | 1.83 | 1.74 |
| 2 | 1.69 | 1.69 |
| 3 | 1.63 | 1.68 |
| 4 | 1.60 | 1.67 |
| 5 | 1.58 | 1.66 |
| 6 | 1.57 | 1.66 |
| 7 | 1.57 | 1.66 |

## Evaluation results
| Epoch | f1 | exact_match |
| ----- | ----- | ----- |
| 0 | 0.507 | 0.378 |
| 1 | 0.53 | 0.418 |
| 2 | 0.544 | 0.431 |
| 3 | 0.552 | 0.429 |
| 4 | 0.557 | 0.439 |
| 5 | 0.561 | 0.438 |
| 6 | 0.564 | 0.441 |
| 7 | 0.566 | 0.441 |
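
The F1 and exact-match numbers are the standard SQuAD v2 metrics. If you want to compute the same metrics for your own predictions, the `evaluate` library ships them under `squad_v2` (the prediction/reference pair below is a made-up illustration of the expected format):

```python
import evaluate

squad_v2_metric = evaluate.load("squad_v2")

# Hypothetical prediction/reference pair in the squad_v2 metric format
predictions = [
    {"id": "0", "prediction_text": "Clara", "no_answer_probability": 0.0}
]
references = [
    {"id": "0", "answers": {"text": ["Clara"], "answer_start": [11]}}
]

results = squad_v2_metric.compute(predictions=predictions, references=references)
print(results["f1"], results["exact"])
```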

## Framework versions
- Transformers: 4.25.1
- PyTorch: 2.0.0+cu118
- PyTorch Lightning: 1.8.6
- Datasets: 2.7.1
- Tokenizers: 0.13.1
- Sentence Transformers: 2.2.2

## Additional Information
This model was trained as part of my master's thesis **'Evaluation of transformer based language models for use in service information systems'**. The source code is available on [GitHub](https://github.com/LLukas22/Retrieval-Augmented-QA).