MaziyarPanahi committed on
Commit
a5e1abc
1 Parent(s): 215061f

Upload folder using huggingface_hub (#1)


- 0264b550f7fd645e696fe0c5ecd4beb73942434a7a7a00725d5000f3466c6593 (14cab2dd52f16109f148598d8055d7296a0d5b6c)
- fba0fa4d82b78e846c5323e07afb8504712163a8f210d8fd549cbbf3d9ce953c (d3a764e6bbbe328c1a4d7871a059d47da36d8967)
- ab17d4f9923d68dcfc181b5cdb62b4cbd4704b053beea0ee9d794d11c29dbf96 (59ac5f22dcf215a86ba807db08ce39608fac9a14)
- b9b2921bdba4eee200d8c602f6aa460b4b3acd21f03b8eaa784b40dd0eb74e63 (e0b04b25a951df11d7d6cd702032d3726fd7ee50)
- ed975c2241b81654fb26bd7df48a7e755c50a899b199d3dae6f50b655f6de9fe (3eb782e5a7040747d93d4d46b6e448b9ca4011d5)
- 828ffff536b04d72ba2eb31cd39d6a875072d49809c03b917f08f3925d054a53 (eab01d7229afac0e62374c2c1e39841c9625a0e2)
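The commit message above indicates the files were pushed with `huggingface_hub`'s folder-upload API. A minimal sketch of such a call, assuming a local directory layout that is not shown in this commit:

```python
# Minimal sketch: push a local folder of GGUF files to the Hub with
# huggingface_hub's upload_folder. The local path below is hypothetical.
from huggingface_hub import HfApi

api = HfApi()  # uses the token from `huggingface-cli login` or the HF_TOKEN env var
api.upload_folder(
    folder_path="./MistraMystic-GGUF",          # hypothetical local directory of quantized files
    repo_id="MaziyarPanahi/MistraMystic-GGUF",  # target model repository
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```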

.gitattributes CHANGED
@@ -33,3 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ MistraMystic.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ MistraMystic.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ MistraMystic.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ MistraMystic.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ MistraMystic.fp16.gguf filter=lfs diff=lfs merge=lfs -text
+ MistraMystic-GGUF_imatrix.dat filter=lfs diff=lfs merge=lfs -text
MistraMystic-GGUF_imatrix.dat ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7303655929c17d04a6fa93d639207d9d1b95b9898aec98c5f08f7effaadb290e
+ size 4988146
MistraMystic.Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3b20767b3e73299c980680c312e1f0c420296f621b0c0a8d6818e041671a7544
+ size 5136175616
MistraMystic.Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d00cd519b1dd1a87b56d8f36c6f46ad55c14917df46e1ee476f650be27345bbf
+ size 5002482176
MistraMystic.Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:551d327025e6b4232b21ac3b6f30b1acbb12a09878eca812e185d66a5f330533
+ size 5947249152
MistraMystic.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:70b6d2f19c5275e8651a928cef1bf34d3f7aa81a1ebe7fccae163785f30bd45a
+ size 7702565376
MistraMystic.fp16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7348f0f3122ae172986ac6e9523c941de47eee09cffa42653c6ad1082f30e164
+ size 14497337632
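Each of the `ADDED` entries above is a Git LFS pointer: the repository records only a `sha256` oid and a byte size, while the weights themselves live in LFS storage. A minimal sketch for checking a locally downloaded file against those recorded values (the oids and sizes come from this commit; having the files in the working directory is an assumption):

```python
# Sketch: verify a downloaded GGUF file against the sha256 oid and size
# recorded in its Git LFS pointer (values copied from the diff above).
import hashlib
from pathlib import Path

EXPECTED = {
    "MistraMystic.Q5_K_M.gguf": (
        "3b20767b3e73299c980680c312e1f0c420296f621b0c0a8d6818e041671a7544",
        5136175616,
    ),
    "MistraMystic.Q8_0.gguf": (
        "70b6d2f19c5275e8651a928cef1bf34d3f7aa81a1ebe7fccae163785f30bd45a",
        7702565376,
    ),
}

def verify(path: Path) -> bool:
    expected_sha, expected_size = EXPECTED[path.name]
    if path.stat().st_size != expected_size:
        return False
    digest = hashlib.sha256()
    with path.open("rb") as f:
        # Hash in chunks so multi-gigabyte files never need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha

if __name__ == "__main__":
    print(verify(Path("MistraMystic.Q8_0.gguf")))  # assumes the file was downloaded locally
```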
README.md ADDED
@@ -0,0 +1,46 @@
+ ---
+ tags:
+ - quantized
+ - 2-bit
+ - 3-bit
+ - 4-bit
+ - 5-bit
+ - 6-bit
+ - 8-bit
+ - GGUF
+ - text-generation
+ - text-generation
+ model_name: MistraMystic-GGUF
+ base_model: choco58/MistraMystic
+ inference: false
+ model_creator: choco58
+ pipeline_tag: text-generation
+ quantized_by: MaziyarPanahi
+ ---
+ # [MaziyarPanahi/MistraMystic-GGUF](https://huggingface.co/MaziyarPanahi/MistraMystic-GGUF)
+ - Model creator: [choco58](https://huggingface.co/choco58)
+ - Original model: [choco58/MistraMystic](https://huggingface.co/choco58/MistraMystic)
+
+ ## Description
+ [MaziyarPanahi/MistraMystic-GGUF](https://huggingface.co/MaziyarPanahi/MistraMystic-GGUF) contains GGUF format model files for [choco58/MistraMystic](https://huggingface.co/choco58/MistraMystic).
+
+ ### About GGUF
+
+ GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It replaces GGML, which is no longer supported by llama.cpp.
+
+ Here is an incomplete list of clients and libraries that are known to support GGUF:
+
+ * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+ * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
+ * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux is available, in beta as of 27/11/2023.
+ * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
+ * [GPT4All](https://gpt4all.io/index.html), a free and open-source GUI that runs locally, supporting Windows, Linux and macOS with full GPU acceleration.
+ * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+ * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
+ * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance (including GPU support) and ease of use.
+ * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
+
+ ## Special thanks
+
+ 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
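To connect the README above to actual usage: a minimal sketch that downloads one of the quantized files from this repository via `huggingface_hub` and runs it with `llama-cpp-python`, one of the GGUF-capable libraries listed in the model card. The choice of quant, context size, and prompt are illustrative assumptions:

```python
# Sketch: fetch one quantized file from the repo and run a short completion
# with llama-cpp-python. Quant, context size, and prompt are illustrative.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="MaziyarPanahi/MistraMystic-GGUF",
    filename="MistraMystic.Q5_K_M.gguf",  # one of the quants listed in this commit
)

llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)  # -1 offloads all layers if a GPU build is installed
out = llm("Tell me something mystical about the sea.", max_tokens=128)
print(out["choices"][0]["text"])
```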