yxgeee ganyk committed on
Commit 7115e71
1 Parent(s): 7a2b468

Update README.md (#4)


- Update README.md (499ff2a812cdf1e81f373a5d86ee9f0ac8044aaf)


Co-authored-by: YukangGan <ganyk@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +19 -0
README.md CHANGED
@@ -16,6 +16,25 @@ This model is designed for a wide range of NLP tasks, with a focus on programmin
  ## Performance
 
  LLaMA-Pro demonstrates advanced performance across various benchmarks. It outperforms existing models in the LLaMA series in handling diverse tasks, showcasing its capability as an intelligent language agent.
 
+ ### Overall Performance on Language, Math, and Code Tasks
+
+ | Model | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | GSM8K-PoT | HumanEval | MBPP | Avg |
+ | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
+ | LLAMA PRO (8B) | 54.10 | 77.94 | 47.88 | 39.04 | 73.95 | 17.89 | 25.42 | 28.66 | 33.20 | 44.2 |
+ | LLaMA2-7B | 53.07 | 78.59 | 46.87 | 38.76 | 74.03 | 14.48 | 17.68 | 13.05 | 20.09 | 39.62 |
+ | CodeLLaMA-7B | 39.93 | 60.80 | 31.12 | 37.82 | 64.01 | 5.16 | 25.20 | 33.50 | 41.40 | 37.66 |
+ | LLAMA PRO-INSTRUCT | 52.30 | 76.88 | 52.57 | 48.80 | 72.53 | 43.59 | 55.61 | 44.51 | 37.88 | 53.8 |
+
+ ### Performance on GPT-4 Evaluation
+
+ | Model | MT-Bench |
+ | :-: | :-: |
+ | Alpaca-13B | 4.53 |
+ | CodeLLaMA-7B-Instruct | 5.71 |
+ | Vicuna-7B | 6.17 |
+ | LLaMA2-7B-Chat | 6.27 |
+ | LLAMA PRO-INSTRUCT | 6.32 |
+
  ## Limitations
 
  While LLaMA-Pro addresses some limitations of previous models in the series, it may still encounter challenges specific to highly specialized domains or tasks.
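
For readers who want to try the kind of math and code prompts reflected in the tables added by this commit, the sketch below shows one plausible way to query the model with the `transformers` library. The repository id `TencentARC/LLaMA-Pro-8B-Instruct`, the sample GSM8K-style prompt, and the decoding settings are assumptions for illustration, not part of this commit; benchmark harnesses use their own prompting and evaluation protocols.

```python
# Minimal sketch, assuming the model is published on the Hugging Face Hub
# under an id such as "TencentARC/LLaMA-Pro-8B-Instruct" (adjust to the
# actual repository id). It loads the model and runs one GSM8K-style
# math prompt, the kind of task reported in the tables above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TencentARC/LLaMA-Pro-8B-Instruct"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit an 8B model on one GPU
    device_map="auto",
)

prompt = (
    "Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did Natalia sell altogether "
    "in April and May? Answer step by step."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Greedy decoding keeps the example deterministic; real evaluations
# (GSM8K, HumanEval, MT-Bench) apply their own settings and scoring.
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```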