MaziyarPanahi committed
Commit 725f76f · verified · 1 Parent(s): 80aae9d

Upload folder using huggingface_hub (#1)

- e221950f6966fe0bca80032f2e830b26b196058d7440d5a9b96156d2685df00a (c16f9990158ff8137de41863a8cb3817f9ea8d4f)
- c488433aada06ad323ac1286911108aebed171769ea1e8b8bec260e3aa14c97d (f7378294015297b85d6347dd3f13633eadcbe96f)
- bac36ef1944854f9acb07c292e16c58c03664f496f55367732621455ae239c1a (cccc1c159c2b16cdd221357ced3f418d1f6b21e6)
- e61e3fc840e308201b9096fe96789be7c1bf8f5fe7fea52c495db33dcbb90eae (b637cf5cf1111d0be1210eebc053ec1bd7f8e692)
- affadb807fdb3572fdd99a9ff11f81227ff9cd8a4e477b60b76729c4561e96d7 (57951b42e489e7589273579208dbd6eb3e2fa763)
- 2bb89f102ad23cf9d01e52136bc8041799e80835673ee789f75f2af0738c6ae2 (b976000064ece979250f175410f6ce681b0a4fe0)

.gitattributes CHANGED
@@ -33,3 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Mistral-Nemo-Prism-12B.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-Nemo-Prism-12B.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-Nemo-Prism-12B.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-Nemo-Prism-12B.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-Nemo-Prism-12B.fp16.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-Nemo-Prism-12B-GGUF_imatrix.dat filter=lfs diff=lfs merge=lfs -text
Mistral-Nemo-Prism-12B-GGUF_imatrix.dat ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f6855715d16e003e7e2e6a5ffbbe45a84896d90ec8075605d112665afe9599f6
+ size 7054394
Mistral-Nemo-Prism-12B.Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:02eac7c172251604daa12186565fdd2560cc6cdc50720bd8002ec35e04f9ae16
+ size 8727631712
Mistral-Nemo-Prism-12B.Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7fae4753044eeb12061927f24da09fe66b394e70b1a3f2527bfd9d916726d3e9
+ size 8518735712
Mistral-Nemo-Prism-12B.Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:14a9a626a251769c5ed78a8daa0985b7b70ba925030f23f5169973d39d2cc4a6
+ size 10056210272
Mistral-Nemo-Prism-12B.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:95dcf293567a1e1d5f98450c6b10204c772b953d8a5506e2114457db0a919e3d
+ size 13022369632
Mistral-Nemo-Prism-12B.fp16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2b4445b970fe12401969a05973a81bb0a85941bdb33cb50be1533645a3d305a4
+ size 24504276608
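
Each `ADDED` model file above is stored with Git LFS, so the commit records only a three-line pointer per file: the LFS spec version, the SHA-256 object ID, and the size in bytes (for example, the Q8_0 quant is roughly 13 GB). A throwaway Python sketch of reading those fields, assuming the pointer text is already in hand (the helper name is hypothetical, not part of any library):

```python
# Parse the three-line Git LFS pointer format shown in the diffs above.
def parse_lfs_pointer(text: str) -> dict:
    # Each line is "<key> <value>"; split on the first space only.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "oid": fields["oid"].removeprefix("sha256:"),
        "size_bytes": int(fields["size"]),
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:f6855715d16e003e7e2e6a5ffbbe45a84896d90ec8075605d112665afe9599f6
size 7054394
"""
print(parse_lfs_pointer(pointer))  # the ~7 MB imatrix file
```
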
README.md ADDED
@@ -0,0 +1,45 @@
+ ---
+ base_model: nbeerbower/Mistral-Nemo-Prism-12B
+ inference: false
+ model_creator: nbeerbower
+ model_name: Mistral-Nemo-Prism-12B-GGUF
+ pipeline_tag: text-generation
+ quantized_by: MaziyarPanahi
+ tags:
+ - quantized
+ - 2-bit
+ - 3-bit
+ - 4-bit
+ - 5-bit
+ - 6-bit
+ - 8-bit
+ - GGUF
+ - text-generation
+ ---
+ # [MaziyarPanahi/Mistral-Nemo-Prism-12B-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-Nemo-Prism-12B-GGUF)
+ - Model creator: [nbeerbower](https://huggingface.co/nbeerbower)
+ - Original model: [nbeerbower/Mistral-Nemo-Prism-12B](https://huggingface.co/nbeerbower/Mistral-Nemo-Prism-12B)
+
+ ## Description
+ [MaziyarPanahi/Mistral-Nemo-Prism-12B-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-Nemo-Prism-12B-GGUF) contains GGUF format model files for [nbeerbower/Mistral-Nemo-Prism-12B](https://huggingface.co/nbeerbower/Mistral-Nemo-Prism-12B).
+
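+ As a minimal sketch of fetching one of these files (assuming the `huggingface_hub` Python package is installed; the Q5_K_M quant is picked as an arbitrary example):
+
+ ```python
+ # Download a single quantized file from the Hub into the local cache
+ # and print the resulting local path.
+ from huggingface_hub import hf_hub_download
+
+ model_path = hf_hub_download(
+     repo_id="MaziyarPanahi/Mistral-Nemo-Prism-12B-GGUF",
+     filename="Mistral-Nemo-Prism-12B.Q5_K_M.gguf",
+ )
+ print(model_path)
+ ```
+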
+ ### About GGUF
+
+ GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
+
+ Here is an incomplete list of clients and libraries that are known to support GGUF:
+
+ * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+ * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server (see the sketch after this list).
+ * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available, in beta as of November 27th, 2023.
+ * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
+ * [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU acceleration.
+ * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+ * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
+ * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
+ * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated for a long time and does not support many recent models.
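+
+ For instance, a minimal sketch of local inference with llama-cpp-python (assuming it is installed via `pip install llama-cpp-python`; the prompt and settings are illustrative, not recommendations):
+
+ ```python
+ # Load a local GGUF file and run a short completion with llama-cpp-python.
+ from llama_cpp import Llama
+
+ llm = Llama(
+     model_path="Mistral-Nemo-Prism-12B.Q5_K_M.gguf",  # path to a downloaded quant
+     n_ctx=4096,        # context window; lower it to save memory
+     n_gpu_layers=-1,   # offload all layers if a GPU build is installed
+ )
+
+ out = llm("Write a haiku about mountains.", max_tokens=64)
+ print(out["choices"][0]["text"])
+ ```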
+
+ ## Special thanks
+
+ 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.