shuyuej committed
Commit d673cdd (parent: e5cc18f)

Update README.md

Files changed (1): README.md (+23, -16)
README.md CHANGED
@@ -1,21 +1,21 @@
 ---
 model-index:
-- name: MetaMath-LoRA-LLaMA-7B
-  results:
-  - task:
-      type: text-generation
-    dataset:
-      name: meta-math/MetaMathQA
-      type: meta-math/MetaMathQA
-    metrics:
-    - name: Accuracy (zero-shot)
-      type: Accuracy (zero-shot)
-      value: 0.635
-      verified: true
-    source:
-      name: Arithmetic Reasoning on GSM8K
-      url: https://paperswithcode.com/sota/arithmetic-reasoning-on-gsm8k
-license: mit
+- name: MetaMath-LoRA-LLaMA-7B
+  results:
+  - task:
+      type: text-generation
+    dataset:
+      name: meta-math/MetaMathQA
+      type: meta-math/MetaMathQA
+    metrics:
+    - name: Accuracy (zero-shot)
+      type: Accuracy (zero-shot)
+      value: 0.635
+      verified: true
+    source:
+      name: Arithmetic Reasoning on GSM8K
+      url: https://paperswithcode.com/sota/arithmetic-reasoning-on-gsm8k
+license: apache-2.0
 ---
 
 # Fine-tune LLaMA 2 (7B) with LoRA on meta-math/MetaMathQA
@@ -29,3 +29,10 @@ Invalid output length: 3, Testing length: 1319, **Accuracy: 0.635**
 The official report **accuracy is 0.665** by fine-tuning the whole LLaMA 2 7B model for 3 epochs.
 
 **Note**: The LoRA adapter is being used for future research purposes.
+
+# 🚀 Adapter Usage
+```python
+# Load the Pre-trained LoRA Adapter
+model.load_adapter("shuyuej/metamath_lora_llama2_7b_2_epoch")
+model.enable_adapters()
+```
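
The snippet added by this commit presumes a `model` object already exists. For readers who want to run it end to end, here is a minimal sketch of loading the base model and attaching this adapter via the `transformers` PEFT integration (`peft` must be installed alongside `transformers`). The base checkpoint ID `meta-llama/Llama-2-7b-hf`, the dtype, and the example prompt are illustrative assumptions, not part of this commit:

```python
# Sketch (not part of the commit): attach the LoRA adapter to a base
# LLaMA 2 7B model using the transformers PEFT integration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint (gated repo)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id, torch_dtype=torch.float16)

# Load the pre-trained LoRA adapter from this repository and activate it.
model.load_adapter("shuyuej/metamath_lora_llama2_7b_2_epoch")
model.enable_adapters()

# Zero-shot, GSM8K-style query (illustrative prompt only).
prompt = (
    "Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did Natalia sell altogether?"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

`enable_adapters()` switches the injected LoRA weights on; calling `model.disable_adapters()` reverts generation to the base model, which is convenient for A/B-checking the adapter's effect.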