MaziyarPanahi committed
Commit 3e180ca
1 Parent(s): 6ae7140

Upload folder using huggingface_hub (#1)


- 0cc1be1ad253f26af65ee3fb39af35cf71d65858377066515ef6b199d0327bca (6a7d83d9f4ff4edbeb7252011471730ba148fd2e)
- 486e0204101383633f155db6a6c88d11f7162d3b824dceed91d0f7c395c8639f (93ed121acd7a720329263ec96f7ee11543cec4bf)
- 0644ce68213a7f7a156607a678fad67b7849c665596f4e25b4e3be806588744c (7c5c6a3f38727af14c6098e31d87ed1f80e66752)
- db3aa113052491490f1840dcd7edf38568799b26218fd6d86a961bf808787555 (c91925cf4beb0240e14b7667ff57c80f35782ed0)
- cdc029be10a3e73c84180e9d1533ec68c22e089922fdf1a2f1159c0024e52d4a (48b2c6b23ec6ef5ac6151208ad812de478eea2de)
- 3cc86de348fd3d0cbc70bceadc0936bdfce414ffb020a5e9513a06c8f6e47b6c (e46769b47fef1dc783a95dabe603f3ab368dec05)
- 0c2a2fff3f397024601ed15e9413dbd70844b93b2cc085d97c15a809ef75fae2 (816a511e8e9ee553082ca7b0d8a4e32d4444217b)
- 1cf8346b43f89b5d4e756db6eb67f985a735c8f379cb3736d0d8cc492b4c548b (af980ef84e9c628fd556b7b35b97845e9d0dbabc)
- 8269b071554f8f31caee247de84f5636de2d03497680684051feb861889268f3 (1bdfce3fba0342f05edea0b184b5da5bbd516401)
- a45f9258891fee97440cdeeef017ace2a162854e24bf27cb982ca9e9f2b651ee (989c2e5aa96155e42aadd110832388d3f43d1111)
- a1fade3c58f4d0e480d89c11432c0a328beb9ea4db2c00cc48b7acffe7d98cb7 (c7b70a0283f6b960665bfc2409495404c5b7cc63)
- c5c5e199bdcd342ca68e2875c62f5f5c043af210de67988fd69ec1567e096185 (ba0e0d49e442f1b35a98e12903836e4daeca6c3f)

.gitattributes CHANGED
@@ -33,3 +33,14 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+Inex12M7-7B.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+Inex12M7-7B.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+Inex12M7-7B.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Inex12M7-7B.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Inex12M7-7B.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Inex12M7-7B.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Inex12M7-7B.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Inex12M7-7B.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Inex12M7-7B.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+Inex12M7-7B.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+Inex12M7-7B.fp16.gguf filter=lfs diff=lfs merge=lfs -text
Inex12M7-7B.Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6c804717cf91835751cced1918c68a25ba3fd25dd83872fbd37f58e4fbae487a
+size 2719242080
Inex12M7-7B.Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:16927b22c03c4d35cca921c25cecdfe8817358a2394cc50445c7a0bef7e370ff
+size 3822024544
Inex12M7-7B.Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2fd95979d8b494af582a5def6532ce6f8bfcb6f1f7a2172801c2c8f57cdfb412
+size 3518986080
Inex12M7-7B.Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:87936b95cf5750b4fa5f09e5b3737367c2d91aa3a30c4fcce8ff74b65b1de2eb
+size 3164567392
Inex12M7-7B.Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3f4ab798027c6b38d6ca1e4fbea36147d8baf4d6d720f83e4a04b1e6d1f5a354
+size 4368439136
Inex12M7-7B.Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c7dec706ead0feb323cf630a22a402cc644fdb375a8913df391eac216755fcc5
+size 4140373856
Inex12M7-7B.Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:61deb0728a33ccaa78e43e028d22e5953553cee9760b57d988877d8a12312347
+size 5131409248
Inex12M7-7B.Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dd725cf8db58a0a829ae0883357008fdbef48d4158bb6d3b9cfd027d75779caf
+size 4997715808
Inex12M7-7B.Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:30e0449067274487794e770faa85ff9d2a1fe7185311eeba44facd28552861e3
+size 5942064992
Inex12M7-7B.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c5e2fe89b0eb4354e5027d0724402570870cec12bb58704140455e818fdb284b
+size 7695857504
Inex12M7-7B.fp16.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3da031b299ac662c2ce5e18ffa73a4a39da74f893040d48778566845a21f2fbd
+size 14484731744
README.md ADDED
@@ -0,0 +1,59 @@
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- merge
- mergekit
- lazymergekit
- automerger
- base_model:liminerity/M7-7b
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: Inex12M7-7B-GGUF
base_model: automerger/Inex12M7-7B
inference: false
model_creator: automerger
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Inex12M7-7B-GGUF](https://huggingface.co/MaziyarPanahi/Inex12M7-7B-GGUF)
- Model creator: [automerger](https://huggingface.co/automerger)
- Original model: [automerger/Inex12M7-7B](https://huggingface.co/automerger/Inex12M7-7B)

## Description
[MaziyarPanahi/Inex12M7-7B-GGUF](https://huggingface.co/MaziyarPanahi/Inex12M7-7B-GGUF) contains GGUF format model files for [automerger/Inex12M7-7B](https://huggingface.co/automerger/Inex12M7-7B).

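For illustration (this sketch is an editorial addition, not part of the original model card), a single quantized file can be fetched from this repo with `huggingface_hub` rather than cloning every variant; the Q4_K_M file name below is just an example pick from the files listed above.

```python
# Minimal sketch: download one quantized GGUF file from the repo.
# Assumes `pip install huggingface_hub`; the chosen filename is an example.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="MaziyarPanahi/Inex12M7-7B-GGUF",
    filename="Inex12M7-7B.Q4_K_M.gguf",  # any file from the list in .gitattributes above
)
print(local_path)  # local cache path of the downloaded file
```
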
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

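As a quick usage illustration (again an editorial addition, not from the original card), loading one of these quants with llama-cpp-python, one of the libraries listed above, might look like the following; the file path, context size, and prompt are assumptions for the example.

```python
# Minimal sketch, assuming `pip install llama-cpp-python` and that the Q4_K_M
# file from this repo has been downloaded locally (e.g. via hf_hub_download).
from llama_cpp import Llama

llm = Llama(
    model_path="Inex12M7-7B.Q4_K_M.gguf",  # path to the downloaded GGUF file
    n_ctx=4096,        # context window; lower it if RAM is tight
    n_gpu_layers=-1,   # offload all layers to GPU if built with GPU support; 0 = CPU only
)

out = llm("Write one sentence about quantized language models.", max_tokens=64)
print(out["choices"][0]["text"])
```
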
## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.