Limerobot committed on
Commit 3713916
1 Parent(s): d0a36b2

Update README.md

Files changed (1)
  1. README.md +11 -11
README.md CHANGED
@@ -49,17 +49,17 @@ We evaluated our model on four benchmark datasets, which include `ARC-Challenge`
  We used the [lm-evaluation-harness repository](https://github.com/EleutherAI/lm-evaluation-harness), specifically commit [b281b0921b636bc36ad05c0b0b0763bd6dd43463](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463).

  ### Main Results
- | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA |
- |-----------------------------------------------|---------|-------|-----------|-------|------------|
- | **Llama-2-70b-instruct-v2** (***Ours***, ***Local Reproduction***) | **72.7** | **71.6** | **87.7** | **69.7** | **61.6** |
- | Llama-2-70b-instruct (Ours, Local Reproduction) | 72.0 | 70.7 | 87.4 | 69.3 | 60.7 |
- | llama-65b-instruct (Ours, Local Reproduction) | 69.4 | 67.6 | 86.5 | 64.9 | 58.8 |
- | Llama-2-70b-hf | 67.3 | 67.3 | 87.3 | 69.8 | 44.9 |
- | llama-30b-instruct-2048 (Ours, Open LLM Leaderboard) | 67.0 | 64.9 | 84.9 | 61.9 | 56.3 |
- | llama-30b-instruct-2048 (Ours, Local Reproduction) | 67.0 | 64.9 | 85.0 | 61.9 | 56.0 |
- | llama-30b-instruct (Ours, Open LLM Leaderboard) | 65.2 | 62.5 | 86.2 | 59.4 | 52.8 |
- | llama-65b | 64.2 | 63.5 | 86.1 | 63.9 | 43.4 |
- | falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 |
+ | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | MT_Bench |
+ |-----------------------------------------------|---------|-------|-----------|-------|------------|----------|
+ | **Llama-2-70b-instruct-v2** (***Ours***, ***Local Reproduction***) | **72.7** | **71.6** | **87.7** | **69.7** | **61.6** | 7.440625 |
+ | Llama-2-70b-instruct (Ours, Local Reproduction) | 72.0 | 70.7 | 87.4 | 69.3 | 60.7 | 7.24375 |
+ | llama-65b-instruct (Ours, Local Reproduction) | 69.4 | 67.6 | 86.5 | 64.9 | 58.8 | |
+ | Llama-2-70b-hf | 67.3 | 67.3 | 87.3 | 69.8 | 44.9 | |
+ | llama-30b-instruct-2048 (Ours, Open LLM Leaderboard) | 67.0 | 64.9 | 84.9 | 61.9 | 56.3 | |
+ | llama-30b-instruct-2048 (Ours, Local Reproduction) | 67.0 | 64.9 | 85.0 | 61.9 | 56.0 | 6.88125 |
+ | llama-30b-instruct (Ours, Open LLM Leaderboard) | 65.2 | 62.5 | 86.2 | 59.4 | 52.8 | |
+ | llama-65b | 64.2 | 63.5 | 86.1 | 63.9 | 43.4 | |
+ | falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 | |
 
  ### Scripts
  - Prepare evaluation environments:
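
The diff context above cuts off at the start of the Scripts section, so the actual evaluation commands are not shown. For reference, a run against the pinned lm-evaluation-harness commit could look roughly like the sketch below. This is a minimal sketch, not the authors' script: the `hf-causal-experimental` adapter name, the task identifiers, the per-task few-shot counts (Open LLM Leaderboard conventions of that period), and the `upstage/Llama-2-70b-instruct-v2` model id are assumptions that should be checked against commit b281b0921b636bc36ad05c0b0b0763bd6dd43463.

```python
# Hypothetical sketch of reproducing the table above with lm-evaluation-harness
# checked out at the pinned commit. Adapter name, task ids, and few-shot counts
# are assumptions and should be verified against that commit.
from lm_eval import evaluator

MODEL_ARGS = "pretrained=upstage/Llama-2-70b-instruct-v2"  # hypothetical model id

# (task, num_fewshot) pairs following Open LLM Leaderboard conventions of that era.
# MMLU is exposed as many hendrycksTest-* subtasks in this harness version; only one
# subtask is listed here as an illustration.
BENCHMARKS = [
    ("arc_challenge", 25),
    ("hellaswag", 10),
    ("hendrycksTest-abstract_algebra", 5),
    ("truthfulqa_mc", 0),
]

for task, shots in BENCHMARKS:
    results = evaluator.simple_evaluate(
        model="hf-causal-experimental",  # assumed HF causal-LM adapter in this harness version
        model_args=MODEL_ARGS,
        tasks=[task],
        num_fewshot=shots,
        batch_size=1,
    )
    # Each call returns a dict whose "results" entry maps task name -> metric values.
    print(task, results["results"][task])
```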