chargoddard committed
Commit ae96f79
1 Parent(s): da911da

Update README.md

Files changed (1):
  1. README.md +7 -1
README.md CHANGED
@@ -4,6 +4,10 @@ datasets:
  - JeanKaddour/minipile
  language:
  - en
+ tags:
+ - axolotl
+ - mergekit
+ - llama
  ---

  Meta's Llama 3 70B pruned to 42B parameters using the methodology described in [The Unreasonable Ineffectiveness of the Deeper Layers](https://arxiv.org/abs/2403.17887). Post-pruning trained using QLoRA for ~100M tokens from [JeanKaddour/minipile](https://huggingface.co/datasets/JeanKaddour/minipile).
@@ -29,4 +33,6 @@ Still evaluating, don't get too excited! Might be incredibly dumb. Check out the
  | - humanities      |N/A |none | 5|acc |0.7296|± |0.0062|
  | - other           |N/A |none | 5|acc |0.8101|± |0.0067|
  | - social_sciences |N/A |none | 5|acc |0.8668|± |0.0060|
- | - stem            |N/A |none | 5|acc |0.6825|± |0.0079|
+ | - stem            |N/A |none | 5|acc |0.6825|± |0.0079|
+
+ [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
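
For context on the methodology sentence in the README: the referenced paper scores each block of n consecutive transformer layers by the angular distance between the hidden states entering and leaving that block, drops the lowest-distance block, and then "heals" the pruned model with light fine-tuning (here, QLoRA on minipile). Below is a minimal sketch of that scoring step, not the author's actual script; the model name, the 20-layer block size, and the single calibration sentence are illustrative assumptions rather than the settings used to produce this 42B model.

```python
# Hedged sketch of the layer-pruning heuristic from
# "The Unreasonable Ineffectiveness of the Deeper Layers" (arXiv:2403.17887).
# Model name, block size, and calibration text are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-70B"  # assumption; any causal LM works for the sketch
n_drop = 20                                  # illustrative block size, not the real setting

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

# The paper averages over many calibration samples; one sentence keeps the sketch short.
text = "Example calibration text; in practice average over a calibration set."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# hidden_states[l] is the input to layer l (index 0 is the embedding output), so
# comparing index l with l + n_drop measures what the candidate block actually does.
hs = out.hidden_states
num_layers = len(hs) - 1


def angular_distance(a: torch.Tensor, b: torch.Tensor) -> float:
    # Mean over tokens of arccos(cosine similarity) / pi, as defined in the paper.
    cos = torch.nn.functional.cosine_similarity(a, b, dim=-1).clamp(-1.0, 1.0)
    return (torch.arccos(cos) / torch.pi).mean().item()


scores = {
    start: angular_distance(hs[start].float(), hs[start + n_drop].float())
    for start in range(num_layers - n_drop + 1)
}
best_start = min(scores, key=scores.get)
print(
    f"Drop layers {best_start}..{best_start + n_drop - 1} "
    f"(angular distance {scores[best_start]:.4f}), then heal with QLoRA."
)
```

The block that changes the hidden states the least is the one whose removal should hurt the least, which is why the lowest-distance block is the pruning candidate; the post-pruning QLoRA pass on minipile mentioned in the README then recovers most of the lost quality.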