---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---
# Starling-LM-10.7B-beta
This is Starling-LM-10.7B-beta, a depth-upscaled version of [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta).
This model is intended as a drop-in upgrade over the original 7-billion-parameter model.
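
Since the card declares `library_name: transformers` and `pipeline_tag: text-generation`, the model can be loaded with the standard `transformers` API. Below is a minimal sketch; it assumes the tokenizer ships the same chat template as the original Starling-LM-7B-beta, and the repository id and prompt are placeholders to adjust for your setup.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: replace with the full Hugging Face repo id of this model
model_id = "Starling-LM-10.7B-beta"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # requires `accelerate`; spreads layers across available devices
)

# Build the prompt via the tokenizer's chat template (assumed to match Starling-LM-7B-beta)
messages = [{"role": "user", "content": "Explain depth upscaling in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Strip the prompt tokens and print only the newly generated text
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```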
# GGUF quantizations (courtesy of bartowski)
See [bartowski/Starling-LM-10.7B-beta-GGUF](https://huggingface.co/bartowski/Starling-LM-10.7B-beta-GGUF)
# ExLlamaV2 quantizations (courtesy of [blockblockblock](https://huggingface.co/blockblockblock))
- [2.5 bpw](https://huggingface.co/blockblockblock/Starling-LM-10.7B-beta-bpw2.5)
- [3 bpw](https://huggingface.co/blockblockblock/Starling-LM-10.7B-beta-bpw3)
- [3.5 bpw](https://huggingface.co/blockblockblock/Starling-LM-10.7B-beta-bpw3.5)
- [3.7 bpw](https://huggingface.co/blockblockblock/Starling-LM-10.7B-beta-bpw3.7)
- [4 bpw](https://huggingface.co/blockblockblock/Starling-LM-10.7B-beta-bpw4)
- [4.4 bpw](https://huggingface.co/blockblockblock/Starling-LM-10.7B-beta-bpw4.4)
- [4.6 bpw](https://huggingface.co/blockblockblock/Starling-LM-10.7B-beta-bpw4.6)
- [4.8 bpw](https://huggingface.co/blockblockblock/Starling-LM-10.7B-beta-bpw4.8)
- [5 bpw](https://huggingface.co/blockblockblock/Starling-LM-10.7B-beta-bpw5)
- [5.5 bpw](https://huggingface.co/blockblockblock/Starling-LM-10.7B-beta-bpw5.5)
- [6 bpw](https://huggingface.co/blockblockblock/Starling-LM-10.7B-beta-bpw6)