xianchaowu committed
Commit fd30dcd (1 parent: fbe421b)

better mmlu score

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -8,7 +8,7 @@ license: llama2
 
 0. using the updated [Meta's LLaMA-2 models](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf).
 1. support [4-bit qlora](https://arxiv.org/abs/2305.14314), extreme GPU memory and inference time saving;
-2. comparable MMLU evaluation dataset results, llama2-7b's 0.453 to our 0.4795 (+0.0265).
+2. better MMLU evaluation dataset results, llama2-7b's 45.3% to our 47.95% (+2.65%).
 
 ### Introduction
 Determine the rank of LoRA layers by the singular values of pretrained weight matrices.
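The README's introduction describes choosing the LoRA rank from the singular values of the pretrained weight matrices. The commit does not include the selection code itself, so the following is only a minimal sketch of one way such SVD-based rank selection could look; the function name `suggest_lora_rank` and the `energy`/`max_rank` parameters are hypothetical, not part of this repository.

```python
import numpy as np

def suggest_lora_rank(weight: np.ndarray, energy: float = 0.9, max_rank: int = 64) -> int:
    """Pick the smallest rank whose leading singular values capture
    a given fraction of the spectral energy of the weight matrix.
    (Hypothetical helper; not from this repository.)"""
    s = np.linalg.svd(weight, compute_uv=False)   # singular values, descending
    cum = np.cumsum(s**2) / np.sum(s**2)          # cumulative spectral energy
    rank = int(np.searchsorted(cum, energy) + 1)  # first index reaching the threshold
    return min(rank, max_rank)

# Example: a matrix with two dominant singular values (10 and 5)
w = np.diag([10.0, 5.0] + [1e-6] * 126)
print(suggest_lora_rank(w))  # → 2
```

The intuition is that a weight matrix whose spectrum decays quickly is well approximated by a low-rank update, so a small LoRA rank suffices; a flatter spectrum would call for a larger rank.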