Update README.md
README.md CHANGED
@@ -1,21 +1,21 @@
 ---
 model-index:
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-license:
+- name: MetaMath-LoRA-LLaMA-7B
+  results:
+  - task:
+      type: text-generation
+    dataset:
+      name: meta-math/MetaMathQA
+      type: meta-math/MetaMathQA
+    metrics:
+    - name: Accuracy (zero-shot)
+      type: Accuracy (zero-shot)
+      value: 0.635
+      verified: true
+    source:
+      name: Arithmetic Reasoning on GSM8K
+      url: https://paperswithcode.com/sota/arithmetic-reasoning-on-gsm8k
+license: apache-2.0
 ---
 
 # Fine-tune LLaMA 2 (7B) with LoRA on meta-math/MetaMathQA
@@ -29,3 +29,10 @@ Invalid output length: 3, Testing length: 1319, **Accuracy: 0.635**
 The officially reported **accuracy is 0.665**, obtained by fine-tuning the whole LLaMA 2 7B model for 3 epochs.
 
 **Note**: The LoRA adapter is intended for future research purposes.
+
+# 🚀 Adapter Usage
+```python
+# Load the Pre-trained LoRA Adapter
+model.load_adapter("shuyuej/metamath_lora_llama2_7b_2_epoch")
+model.enable_adapters()
+```
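
For reference, here is a minimal end-to-end sketch of how the snippet added above can be used. The adapter repo id comes from the README; the base checkpoint name (`meta-llama/Llama-2-7b-hf`), the prompt, and the generation settings are assumptions for illustration, not part of the commit.

```python
# Minimal usage sketch. Assumptions: the base checkpoint name, the prompt, and the
# generation settings are illustrative; the adapter repo id is taken from the README.
# Requires `transformers`, `peft`, and `accelerate` to be installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-2-7b-hf"  # assumed LLaMA 2 7B base checkpoint (gated on the Hub)
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto", device_map="auto")

# Load the pre-trained LoRA adapter and switch it on
model.load_adapter("shuyuej/metamath_lora_llama2_7b_2_epoch")
model.enable_adapters()

# Illustrative zero-shot math prompt
prompt = "Question: A shelf holds 8 rows of 12 books. How many books are on the shelf?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The generation settings above are only illustrative; the accuracy figures quoted in the README come from evaluating on the full GSM8K test set (1,319 problems).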
|