---
license: cc-by-nc-4.0
datasets:
- meta-math/MetaMathQA
language:
- en
pipeline_tag: text-generation
tags:
- Math
---
## EXL2 Quants
Quantization:
- [3.0bpw](https://huggingface.co/hgloow/Merged-AGI-7B-EXL2/tree/3.0bpw)
- [4.0bpw (main)](https://huggingface.co/hgloow/Merged-AGI-7B-EXL2/tree/main)
- [6.0bpw](https://huggingface.co/hgloow/Merged-AGI-7B-EXL2/tree/6.0bpw)
- [8.0bpw](https://huggingface.co/hgloow/Merged-AGI-7B-EXL2/tree/8.0bpw)
Zipped quantization (if you prefer to download a single file):
- [3.0bpw](https://huggingface.co/hgloow/Merged-AGI-7B-EXL2/tree/3.0bpw)
- [4.0bpw (main)](https://huggingface.co/hgloow/Merged-AGI-7B-EXL2/tree/main)
- [6.0bpw](https://huggingface.co/hgloow/Merged-AGI-7B-EXL2/tree/6.0bpw)
- [8.0bpw](https://huggingface.co/hgloow/Merged-AGI-7B-EXL2/tree/8.0bpw)
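Each bit rate lives on its own branch of the repository, so you can fetch just one quant by passing that branch name as the revision. A minimal sketch using the `huggingface-cli` tool (the local directory name is an arbitrary choice):

```shell
# Requires the Hugging Face Hub CLI: pip install -U "huggingface_hub[cli]"
# Download only the 6.0bpw branch into a local folder of your choosing.
huggingface-cli download hgloow/Merged-AGI-7B-EXL2 \
  --revision 6.0bpw \
  --local-dir Merged-AGI-7B-EXL2-6.0bpw
```

The same pattern works for the other branches (`3.0bpw`, `8.0bpw`, or `main` for 4.0bpw).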
## Merged-AGI-7B
A SLERP (spherical linear interpolation) merge of [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling) and [fblgit/juanako-7b-UNA](https://huggingface.co/fblgit/juanako-7b-UNA).

Use the ChatML prompt format.
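ChatML wraps each turn in `<|im_start|>{role}` / `<|im_end|>` delimiters and ends the prompt with an open assistant turn for the model to complete. A minimal sketch of building such a prompt by hand (the helper name is illustrative, not part of this repo):

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a single-turn ChatML prompt string.

    The <|im_start|>/<|im_end|> tags are the standard ChatML
    delimiters; the trailing open assistant turn is where the
    model generates its reply.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "What is 2 + 2?",
)
print(prompt)
```

Most inference frontends (e.g. text-generation-webui with its ChatML instruction template) can apply this formatting for you; the sketch just shows what the model expects to see.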
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results: coming soon.
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | Coming soon |
| ARC (25-shot) | Coming soon |
| HellaSwag (10-shot) | Coming soon |
| MMLU (5-shot) | Coming soon |
| TruthfulQA (0-shot) | Coming soon |
| Winogrande (5-shot) | Coming soon |
| GSM8K (5-shot) | Coming soon |