Upload folder using huggingface_hub (#1)
by MaziyarPanahi · opened
- .gitattributes +11 -0
- Inex12Multiverseex26-7B.Q2_K.gguf +3 -0
- Inex12Multiverseex26-7B.Q3_K_L.gguf +3 -0
- Inex12Multiverseex26-7B.Q3_K_M.gguf +3 -0
- Inex12Multiverseex26-7B.Q3_K_S.gguf +3 -0
- Inex12Multiverseex26-7B.Q4_K_M.gguf +3 -0
- Inex12Multiverseex26-7B.Q4_K_S.gguf +3 -0
- Inex12Multiverseex26-7B.Q5_K_M.gguf +3 -0
- Inex12Multiverseex26-7B.Q5_K_S.gguf +3 -0
- Inex12Multiverseex26-7B.Q6_K.gguf +3 -0
- Inex12Multiverseex26-7B.Q8_0.gguf +3 -0
- Inex12Multiverseex26-7B.fp16.gguf +3 -0
- README.md +60 -0
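For context, an upload like this one is typically made with huggingface_hub's `upload_folder` API. A minimal sketch follows; the local folder path is a placeholder and the exact call used for this commit is not shown in the diff.

```python
# Sketch of uploading a local folder of GGUF files with huggingface_hub.
# folder_path is a placeholder; repo_id and commit message match this upload.
from huggingface_hub import HfApi

api = HfApi()  # uses the cached login or the HF_TOKEN environment variable
api.upload_folder(
    folder_path="./Inex12Multiverseex26-7B-GGUF",  # local dir with .gguf files and README.md
    repo_id="MaziyarPanahi/Inex12Multiverseex26-7B-GGUF",
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```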
.gitattributes
CHANGED
@@ -33,3 +33,14 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+Inex12Multiverseex26-7B.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+Inex12Multiverseex26-7B.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+Inex12Multiverseex26-7B.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Inex12Multiverseex26-7B.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Inex12Multiverseex26-7B.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Inex12Multiverseex26-7B.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Inex12Multiverseex26-7B.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Inex12Multiverseex26-7B.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Inex12Multiverseex26-7B.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+Inex12Multiverseex26-7B.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+Inex12Multiverseex26-7B.fp16.gguf filter=lfs diff=lfs merge=lfs -text
Inex12Multiverseex26-7B.Q2_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f448eb70336267b4bf799602128da84594084f7ed2e67a16c7ca7bd348263e35
+size 2719242080
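Each .gguf entry in this commit is a Git LFS pointer rather than the binary itself: three lines recording the pointer spec version, the SHA-256 (oid) of the actual file, and its size in bytes. A minimal sketch of checking a downloaded file against such a pointer; the local path is assumed, and the oid and size are copied from the Q2_K pointer above.

```python
# Verify a downloaded GGUF file against the oid/size recorded in its LFS pointer.
import hashlib

path = "Inex12Multiverseex26-7B.Q2_K.gguf"  # assumed local download path
expected_oid = "f448eb70336267b4bf799602128da84594084f7ed2e67a16c7ca7bd348263e35"
expected_size = 2719242080

h = hashlib.sha256()
size = 0
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)
        size += len(chunk)

assert size == expected_size, f"size mismatch: {size}"
assert h.hexdigest() == expected_oid, "sha256 mismatch"
print("OK: file matches its LFS pointer")
```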
Inex12Multiverseex26-7B.Q3_K_L.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:949a1ebab70b2d9e12721cd622f85d3351d69d13e2fd81fd6cc16d3d022f66a7
+size 3822024544
Inex12Multiverseex26-7B.Q3_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3ff98289efb0facf92632cdd6f1ff6c25abbaf2930a28c79381701411248f855
+size 3518986080
Inex12Multiverseex26-7B.Q3_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fa7e5f1689745b05714ea2d3b623d56aaca619797f126804ec1b64e8bc3888ec
+size 3164567392
Inex12Multiverseex26-7B.Q4_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:69906f244827196041ddccfc10103d0371d1330abbae99d92080131f434c69c6
+size 4368439136
Inex12Multiverseex26-7B.Q4_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:edc4ed1005bc982c1d3f8a37021c0dad512b5a3c8694db14ccef76299acd2faf
+size 4140373856
Inex12Multiverseex26-7B.Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4782f88f042b90349ae559a7ba5f9cd5d858c6853c6afa76f13602c21b96d1e4
+size 5131409248
Inex12Multiverseex26-7B.Q5_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:97d0cefe47130aa928387465fc2e73788756d559f551c61052f57834bbf27a3e
+size 4997715808
Inex12Multiverseex26-7B.Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eb76f4457815208e1a704ac01e1cea5992253748a12fa8d4d0adfc767f44857d
+size 5942064992
Inex12Multiverseex26-7B.Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b8b4d40ad06609c8f8f304517659f1d7070149d99f055242d9022566f8c4001a
+size 7695857504
Inex12Multiverseex26-7B.fp16.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8559ebcacbc0a74d2afd899e3630d7ce05804c419921c7382b6f8eb9f5316d7f
+size 14484731744
README.md
ADDED
@@ -0,0 +1,60 @@
+---
+tags:
+- quantized
+- 2-bit
+- 3-bit
+- 4-bit
+- 5-bit
+- 6-bit
+- 8-bit
+- GGUF
+- transformers
+- safetensors
+- mistral
+- text-generation
+- merge
+- mergekit
+- lazymergekit
+- automerger
+- base_model:MSL7/INEX12-7b
+- base_model:allknowingroger/MultiverseEx26-7B-slerp
+- license:apache-2.0
+- autotrain_compatible
+- endpoints_compatible
+- text-generation-inference
+- region:us
+- text-generation
+model_name: Inex12Multiverseex26-7B-GGUF
+base_model: automerger/Inex12Multiverseex26-7B
+inference: false
+model_creator: automerger
+pipeline_tag: text-generation
+quantized_by: MaziyarPanahi
+---
+# [MaziyarPanahi/Inex12Multiverseex26-7B-GGUF](https://huggingface.co/MaziyarPanahi/Inex12Multiverseex26-7B-GGUF)
+- Model creator: [automerger](https://huggingface.co/automerger)
+- Original model: [automerger/Inex12Multiverseex26-7B](https://huggingface.co/automerger/Inex12Multiverseex26-7B)
+
+## Description
+[MaziyarPanahi/Inex12Multiverseex26-7B-GGUF](https://huggingface.co/MaziyarPanahi/Inex12Multiverseex26-7B-GGUF) contains GGUF format model files for [automerger/Inex12Multiverseex26-7B](https://huggingface.co/automerger/Inex12Multiverseex26-7B).
+
+### About GGUF
+
+GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
+
+Here is an incomplete list of clients and libraries that are known to support GGUF:
+
+* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
+* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
+* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
+* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
+* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
+* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
+* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
+
+## Special thanks
+
+🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
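The README above lists llama-cpp-python among the clients that can run these GGUF files. A minimal sketch of loading one of the quants from this upload with it, assuming the package is installed; the choice of the Q4_K_M file, the context size, the prompt, and the sampling parameters are illustrative.

```python
# Download one GGUF quant from this repo and run a short completion with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="MaziyarPanahi/Inex12Multiverseex26-7B-GGUF",
    filename="Inex12Multiverseex26-7B.Q4_K_M.gguf",  # any quant from this upload works
)

llm = Llama(model_path=model_path, n_ctx=4096)  # n_ctx is an illustrative choice
out = llm("Q: What is the GGUF file format?\nA:", max_tokens=128, stop=["\n"])
print(out["choices"][0]["text"])
```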