mgoin and alexmarques committed
Commit
8f25c2f
1 Parent(s): 03f1419

Update README.md (#1)

- Update README.md (52996374c18f1470ac4b0c7689305702ebbc6930)


Co-authored-by: Alexandre Marques <alexmarques@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +11 -8
README.md CHANGED
@@ -49,17 +49,20 @@ Model evaluation metrics and results.
 
 | Benchmark | Metric | Llama-2-7b-ultrachat | Llama-2-7b-pruned50-retrained-ultrachat |
 |------------------------------------------------|---------------|-------------|-------------------------------|
-| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | xxxx | xxxx |
-| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | xxxx | xxxx |
-| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | xxxx | xxxx |
-| [ARC-c](https://arxiv.org/abs/1911.01547) | | xxxx | xxxx |
-| [TruthfulQA](https://arxiv.org/abs/2109.07958) | 5-shot | xxxx | xxxx |
-| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | xxxx | xxxx |
-| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | xxxx | xxxx |
+| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot | 46.1% | 41.4% |
+| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | 75.9% | 73.5% |
+| [WinoGrande](https://arxiv.org/abs/1907.10641) | 5-shot | 72.6% | 67.8% |
+| [ARC-c](https://arxiv.org/abs/1911.01547) | 25-shot | 52.8% | 49.0% |
+| [TruthfulQA](https://arxiv.org/abs/2109.07958) | 5-shot | 44.8% | 39.5% |
+| [GSM8K](https://arxiv.org/abs/2110.14168) | 5-shot | 12.4% | 8.0% |
+| [AlpacaEval](https://arxiv.org/abs/2107.03374) ([Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) evaluator) | Win rate | 57.6% | 60.1% |
+| [AlpacaEval](https://arxiv.org/abs/2107.03374) (GPT-4 Turbo evaluator) | Win rate | 60.6% | 59.0% |
+
 
 ## Model Training Details
 
-Coming soon.
+This model was obtained by sparse transfer of the sparse foundational model [Llama-2-7b-pruned50-retrained](https://huggingface.co/neuralmagic/Llama-2-7b-pruned50-retrained) on the [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset.
+Training was performed for 2 epochs and used [SquareHead](https://arxiv.org/abs/2310.06927) knowledge distillation with [Llama-2-7b-ultrachat](https://huggingface.co/neuralmagic/Llama-2-7b-ultrachat) as the teacher.
 
 ## Help
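The accuracy rows added in this commit can be summarized with a quick script. The sketch below (values copied from the table above; the script itself is only illustrative, not part of the model card) computes the per-benchmark drop, in percentage points, between the dense Llama-2-7b-ultrachat and the 50%-pruned variant:

```python
# Scores (% accuracy) from the updated README table.
dense = {"MMLU": 46.1, "HellaSwag": 75.9, "WinoGrande": 72.6,
         "ARC-c": 52.8, "TruthfulQA": 44.8, "GSM8K": 12.4}
pruned = {"MMLU": 41.4, "HellaSwag": 73.5, "WinoGrande": 67.8,
          "ARC-c": 49.0, "TruthfulQA": 39.5, "GSM8K": 8.0}

# Absolute drop (percentage points) introduced by 50% pruning, per benchmark.
delta = {name: round(dense[name] - pruned[name], 1) for name in dense}
for name, drop in delta.items():
    print(f"{name}: -{drop} pts")
```

Note the drops stay within a few points on every accuracy benchmark, and the AlpacaEval win rates in the table are mixed (one evaluator favors each model).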