Update README.md
LoRA adapters of `meta-llama/Meta-Llama-3.1-8B-Instruct`, trained on 100 context samples from the HotpotQA dataset using the RAFT method, enable the model to better reason over the context and return more accurate answers.

### Evaluation

Evaluated on the full validation set of HotpotQA.

| type       | exact_match | f1         | precision | recall |
|------------|-------------|------------|-----------|--------|
| pretrained | 0.2980      | 0.3979     | 0.4116    | 0.5263 |
| finetuned  | 0.3606      | **0.4857** | 0.4989    | 0.5318 |

The finetuned version improves **F1 by 22% and the average metric by 15%** relative to the pretrained model.
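The relative improvements quoted above can be reproduced from the table values; a minimal sketch (metric names follow the table header):

```python
# Metric values copied from the evaluation table above.
pretrained = {"exact_match": 0.2980, "f1": 0.3979, "precision": 0.4116, "recall": 0.5263}
finetuned = {"exact_match": 0.3606, "f1": 0.4857, "precision": 0.4989, "recall": 0.5318}

# Relative F1 gain of the finetuned adapters over the base model.
f1_gain = (finetuned["f1"] - pretrained["f1"]) / pretrained["f1"]

# Relative gain of the average across all four metrics.
avg_pre = sum(pretrained.values()) / len(pretrained)
avg_fin = sum(finetuned.values()) / len(finetuned)
avg_gain = (avg_fin - avg_pre) / avg_pre

print(round(f1_gain * 100), round(avg_gain * 100))  # → 22 15
```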