mgoin and shubhrapandit committed on
Commit
4ee2327
1 Parent(s): 771dd70

Update README.md (#1)


- Update README.md (1ba5ef59ba8ffd55220d4240e019ddfe9a9c8b52)


Co-authored-by: Shubhra Pandit <shubhrapandit@users.noreply.huggingface.co>

Files changed (1):
  1. README.md +8 -8
README.md CHANGED
@@ -50,17 +50,17 @@ Model evaluation metrics and results.
 
 | Benchmark | Metric | Llama-2-7b-instruct | Llama-2-7b-pruned50-retrained-instruct |
 |------------------------------------------------|---------------|-------------|-------------------------------|
-| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | xxxx | xxxx |
-| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | xxxx | xxxx |
-| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | xxxx | xxxx |
-| [ARC-c](https://arxiv.org/abs/1911.01547) | | xxxx | xxxx |
-| [TruthfulQA](https://arxiv.org/abs/2109.07958) | 5-shot | xxxx | xxxx |
-| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | xxxx | xxxx |
-| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | xxxx | xxxx |
+| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 48.60% | 45.10% |
+| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | 79.45% | 78.86% |
+| [WinoGrande](https://arxiv.org/abs/1907.10641) | 5-shot | 75.69% | 72.61% |
+| [ARC-c](https://arxiv.org/abs/1911.01547) | 25-shot | 53.92% | 50.77% |
+| [TruthfulQA](https://arxiv.org/abs/2109.07958) | 0-shot | 43.63% | 44.40% |
+| [GSM8K](https://arxiv.org/abs/2110.14168) | 5-shot | 15.92% | 16.38% |
 
 ## Model Training Details
 
-Coming soon.
+This model was obtained by sparse transfer of the sparse foundational model [Llama-2-7b-pruned50-retrained](https://huggingface.co/neuralmagic/Llama-2-7b-pruned70-retrained) on a blend of the [Open Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus), 10% [Open Orca](https://huggingface.co/datasets/Open-Orca/OpenOrca), and 10% [Dolphin](https://huggingface.co/datasets/cognitivecomputations/dolphin) datasets.
+Training was performed for 2 epochs.
 
 ## Help
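The training-data mix described in the added README text (the full Open Platypus set plus 10% subsets of Open Orca and Dolphin) can be sketched as a generic blending helper. This is a toy illustration only: `blend_datasets`, the sampling seed, and the record names are hypothetical and not taken from the actual training code.

```python
import random

def blend_datasets(full, extras, fraction=0.10, seed=0):
    # Combine one full dataset with fixed-fraction random subsets of the
    # others, then shuffle -- mirroring the "100% Platypus + 10% Orca +
    # 10% Dolphin" blend stated in the model card. (Hypothetical helper.)
    rng = random.Random(seed)
    mix = list(full)
    for ds in extras:
        k = int(fraction * len(ds))          # size of the 10% subset
        mix.extend(rng.sample(list(ds), k))  # sample without replacement
    rng.shuffle(mix)
    return mix

# Toy example: all of A, 10% each of B and C.
a = [f"platypus-{i}" for i in range(10)]
b = [f"orca-{i}" for i in range(100)]
c = [f"dolphin-{i}" for i in range(100)]
mix = blend_datasets(a, [b, c])
print(len(mix))  # prints 30 (10 + 10 + 10)
```

In practice the same blend would be built from the three Hugging Face datasets linked above, but the schemas differ between them, so real code would also need to normalize each dataset to a common prompt/response format before mixing.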