MaziyarPanahi committed on
Commit
55cc814
1 Parent(s): 92af1d4

Upload folder using huggingface_hub (#1)


- 4000b3a3f408ac469dfb84be3aef821a86af8da46f5f8edce05f9031ff5aaac2 (916e15011fb5638c5a50e8ad61a42278bac330a3)
- b594a995383d99f208df140a06ace75eba88bdcc6415e7a4f844232ed0b1ab2a (b3178dd9fa65664143a037ecd1fa0d9359027b26)
- d93b0c12724ca4ee82f19e6eff8755acf046f9aab374d0bac8951150ba9ce3bb (ad224eda3a7d4247f34abfb5fcdc2c21e87cc796)
- 9aae87a3075565f9ceaa76bf1d115eb5d063116a3f0a4925c4d83228cd8ebc2b (de04c6fcc3d46e96e1e42382116759911b5c6405)
- 5d044e683f567c26e7290b1dd665d3a4a5d1b78763cd22637547af938ebf9776 (cf2c3b62981793dd119ea9f01116f401e058f00a)
- 4c8c056f74e9ed487262c681daee8a5554eab6b2752c43ae84eff31dba8f955b (c22dfea48d2c23844732f913b6e0197bc65709b6)

.gitattributes CHANGED
@@ -33,3 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-7B-CyberRombos.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-7B-CyberRombos.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-7B-CyberRombos.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-7B-CyberRombos.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-7B-CyberRombos.fp16.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-7B-CyberRombos-GGUF_imatrix.dat filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-CyberRombos-GGUF_imatrix.dat ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8ee5a99780b68f4e4cc54c206388371035a47a85d6725681e891d5be33f25140
+ size 4536654
Qwen2.5-7B-CyberRombos.Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:60c708a9a834897f04373a19d56f0cc7d7a7687930b06a8daf3b826d293e0c76
+ size 5445440768
Qwen2.5-7B-CyberRombos.Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0bbf025ef7c47fbafde68122c83ff3f1d8db929bfb06036f21c250ed683424a0
+ size 5315785984
Qwen2.5-7B-CyberRombos.Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dca3b0434778e993dd52cbec7e859784e17e9b52bb3aaa8f364ca7692a6a140c
+ size 6254808320
Qwen2.5-7B-CyberRombos.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d7a429e8fd8e6818530d455755aa24580e083813f1dc88c106443eda7025aaec
+ size 8099134720
Qwen2.5-7B-CyberRombos.fp16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1deda4940bdaf767b5fe0cc247155ded5f6202564a5feefdafdb4e5cd50f0631
+ size 15238462464
README.md ADDED
@@ -0,0 +1,46 @@
+ ---
+ tags:
+ - quantized
+ - 2-bit
+ - 3-bit
+ - 4-bit
+ - 5-bit
+ - 6-bit
+ - 8-bit
+ - GGUF
+ - text-generation
+ model_name: Qwen2.5-7B-CyberRombos-GGUF
+ base_model: bunnycore/Qwen2.5-7B-CyberRombos
+ inference: false
+ model_creator: bunnycore
+ pipeline_tag: text-generation
+ quantized_by: MaziyarPanahi
+ ---
+ # [MaziyarPanahi/Qwen2.5-7B-CyberRombos-GGUF](https://huggingface.co/MaziyarPanahi/Qwen2.5-7B-CyberRombos-GGUF)
+ - Model creator: [bunnycore](https://huggingface.co/bunnycore)
+ - Original model: [bunnycore/Qwen2.5-7B-CyberRombos](https://huggingface.co/bunnycore/Qwen2.5-7B-CyberRombos)
+
+ ## Description
+ [MaziyarPanahi/Qwen2.5-7B-CyberRombos-GGUF](https://huggingface.co/MaziyarPanahi/Qwen2.5-7B-CyberRombos-GGUF) contains GGUF format model files for [bunnycore/Qwen2.5-7B-CyberRombos](https://huggingface.co/bunnycore/Qwen2.5-7B-CyberRombos).
+
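+ As a minimal sketch, one of these quants can be fetched with the `huggingface_hub` Python library; the repo id and filename below come from the files in this commit, while everything else (installation, cache location) is left to the library's defaults:
+
+ ```python
+ # Sketch: download one GGUF quant from this repo with huggingface_hub.
+ # Assumes `pip install huggingface_hub`; the filename matches a file uploaded in this commit.
+ from huggingface_hub import hf_hub_download
+
+ model_path = hf_hub_download(
+     repo_id="MaziyarPanahi/Qwen2.5-7B-CyberRombos-GGUF",
+     filename="Qwen2.5-7B-CyberRombos.Q5_K_M.gguf",
+ )
+ print(model_path)  # local path to the downloaded .gguf file
+ ```
+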
+ ### About GGUF
+
+ GGUF is a format introduced by the llama.cpp team on August 21st, 2023, as a replacement for GGML, which is no longer supported by llama.cpp.
+
+ Here is an incomplete list of clients and libraries that are known to support GGUF (a short loading sketch follows the list):
+
+ * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+ * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
+ * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux build is available, in beta as of 27/11/2023.
+ * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
+ * [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU acceleration.
+ * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+ * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
+ * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
+ * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible AI server. Note that, as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
+
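+ As a rough sketch (assuming `llama-cpp-python` is installed and the Q5_K_M file has been downloaded as above; the prompt and parameters are placeholders, not recommended settings):
+
+ ```python
+ # Sketch: run one of the quants locally with llama-cpp-python.
+ from llama_cpp import Llama
+
+ # model_path points at the downloaded Qwen2.5-7B-CyberRombos.Q5_K_M.gguf file.
+ llm = Llama(model_path=model_path, n_ctx=4096)
+
+ out = llm("Explain what a GGUF file is in one sentence.", max_tokens=64)
+ print(out["choices"][0]["text"])
+ ```
+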
+ ## Special thanks
+
+ 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.