|
--- |
|
library_name: transformers |
|
license: llama3.1 |
|
datasets: |
|
- phatvo/hotpotqa-raft-dev-100 |
|
metrics: |
|
- f1 |
|
- exact_match |
|
pipeline_tag: text-generation |
|
--- |
|
|
|
# Model Card for Llama-3.1-8B-Instruct RAFT LoRA Adapters
|
|
|
LoRA adapters for `meta-llama/Meta-Llama-3.1-8B-Instruct`, fine-tuned with the RAFT method on HotpotQA for context-grounded question answering.
|
|
|
|
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
LoRA adapters for `meta-llama/Meta-Llama-3.1-8B-Instruct`, trained on 100 context samples from the HotpotQA dataset using the RAFT (Retrieval-Augmented Fine-Tuning) method. The adapters help the model reason over the provided context and return more accurate answers.
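A minimal sketch of loading the adapters on top of the base model with `peft` (the adapter repo id below is a placeholder; substitute this repository's actual id):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
adapter_id = "path/to/this-adapter-repo"  # placeholder: replace with this repo's id

# Load the base model, then attach the LoRA adapters on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

# RAFT-style prompting: supply the retrieved context alongside the question.
messages = [
    {"role": "user", "content": "Context: <retrieved passages>\n\nQuestion: <question>"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```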
|
|
|
### Evaluation |
|
|
|
Evaluated on the full validation set of HotpotQA.
|
|
|
|
|
| type       | exact_match | f1         | precision | recall |
|------------|-------------|------------|-----------|--------|
| pretrained | 0.2980      | 0.3979     | 0.4116    | 0.5263 |
| finetuned  | 0.3606      | **0.4857** | 0.4989    | 0.5318 |
|
|
|
The finetuned version improves F1 by **22%** (relative) over the pretrained model, and the reported metrics by roughly **15% on average**.
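For reference, exact_match and f1 above follow the standard SQuAD-style answer scoring: answers are normalized (lowercased, punctuation and articles stripped), then compared verbatim for exact match and by token overlap for F1. A minimal sketch of that computation (an assumption about the exact evaluation script, but the conventional definition for HotpotQA/SQuAD metrics):

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(pred: str, gold: str) -> float:
    """1.0 if normalized strings are identical, else 0.0."""
    return float(normalize(pred) == normalize(gold))

def f1_score(pred: str, gold: str) -> float:
    """Token-level F1 between normalized prediction and gold answer."""
    pred_toks = normalize(pred).split()
    gold_toks = normalize(gold).split()
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)
```

Per-example scores are averaged over the validation set to produce the numbers in the table.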