MaziyarPanahi committed
Commit c706d84 · verified · 1 Parent(s): 06885ba

Upload folder using huggingface_hub (#1)


- 34c3209e4931b392ddbf111e134f59413e1caae8a6d67e35cdc7a2f32c786f95 (dc7c4c96bdf717f8f70025022a7ca03c8963bbb5)
- d91bef26c35519f00c3a2e4efb12e2781e1e2642574e804e4b56fe366e638133 (63853b40a4a3be2dac3e0b30a5833509ecd46898)
- bfa2625b0583787129477469c6be8ad31c7749b84c078bb17f57dfaec7d90af1 (48e5c75fc32217dd4cb1e689aff8957d34562eef)
- a74753d8e28e50801eaf374590bf776743f53e0eef07a28e486f2ae838849fea (a9cef1fcef4e6b86ed4cb1c917db5c041fcb2b13)
- 9a20a4b70b7d1bdafbd409cd262f5901accdf71a8b1ecde9c5df459c11610184 (1d64ab49f3a8402a24a119ee023d2d3dae94b264)

.gitattributes CHANGED
@@ -33,3 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-Coder-1.5B-CodeFIM-GGUF_imatrix.dat filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-Coder-1.5B-CodeFIM.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-Coder-1.5B-CodeFIM.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-Coder-1.5B-CodeFIM.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-Coder-1.5B-CodeFIM.fp16.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-Coder-1.5B-CodeFIM.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-Coder-1.5B-CodeFIM-GGUF_imatrix.dat ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd5417c7e4c7b6ac305b3722690efad42f75d2444f46654fd925f884d6d51be5
+ size 2042190
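
The entries above and below are Git LFS pointer files: the repository itself stores only a three-line pointer (spec version, SHA-256 of the payload, and its size in bytes), while the actual GGUF blobs live in LFS storage. Here is a minimal sketch of reading such a pointer; `parse_lfs_pointer` is a hypothetical helper written for illustration, not part of git-lfs or huggingface_hub:

```python
# Minimal sketch: split a Git LFS pointer into its key/value fields.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:cd5417c7e4c7b6ac305b3722690efad42f75d2444f46654fd925f884d6d51be5
size 2042190"""

info = parse_lfs_pointer(pointer)
print(info["oid"])        # sha256:cd5417...
print(int(info["size"]))  # 2042190 bytes (~2 MB imatrix file)
```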
Qwen2.5-Coder-1.5B-CodeFIM.Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb2625deb5ea3128b31516ad758669d8e8f9c8a5d96998bfe01bb2e4d988d0ef
+ size 1125050112
Qwen2.5-Coder-1.5B-CodeFIM.Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a6794936468dc41a58d6975b56c7c302f071dea68004a303e8aca93efd1de2fe
+ size 1098729216
Qwen2.5-Coder-1.5B-CodeFIM.Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:da1fb9f56d4ac71116dbfe095a39fa078b9429505f6e0f8fd6f7cc6e6b41e28c
+ size 1272739584
Qwen2.5-Coder-1.5B-CodeFIM.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:010cddb0ee4620aa570eb706230d88baf0de870ab517f07254d48ae332144520
+ size 1646572800
Qwen2.5-Coder-1.5B-CodeFIM.fp16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dda13aa06e65b5ccdae460a6bdec2b78942a93ac4093e7b952efa7e5e10006b9
+ size 3093668896
README.md ADDED
@@ -0,0 +1,45 @@
+ ---
+ tags:
+ - quantized
+ - 2-bit
+ - 3-bit
+ - 4-bit
+ - 5-bit
+ - 6-bit
+ - 8-bit
+ - GGUF
+ - text-generation
+ model_name: Qwen2.5-Coder-1.5B-CodeFIM-GGUF
+ base_model: Etherll/Qwen2.5-Coder-1.5B-CodeFIM
+ inference: false
+ model_creator: Etherll
+ pipeline_tag: text-generation
+ quantized_by: MaziyarPanahi
+ ---
+ # [MaziyarPanahi/Qwen2.5-Coder-1.5B-CodeFIM-GGUF](https://huggingface.co/MaziyarPanahi/Qwen2.5-Coder-1.5B-CodeFIM-GGUF)
+ - Model creator: [Etherll](https://huggingface.co/Etherll)
+ - Original model: [Etherll/Qwen2.5-Coder-1.5B-CodeFIM](https://huggingface.co/Etherll/Qwen2.5-Coder-1.5B-CodeFIM)
+
+ ## Description
+ [MaziyarPanahi/Qwen2.5-Coder-1.5B-CodeFIM-GGUF](https://huggingface.co/MaziyarPanahi/Qwen2.5-Coder-1.5B-CodeFIM-GGUF) contains GGUF format model files for [Etherll/Qwen2.5-Coder-1.5B-CodeFIM](https://huggingface.co/Etherll/Qwen2.5-Coder-1.5B-CodeFIM).
+
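+ A minimal sketch of fetching one of the quantized files with `huggingface_hub` (the same library used to upload this folder). The filename below is one of the quants in this repository; picking a different quant only changes the `filename` argument:
+
+ ```python
+ # Minimal sketch: download a single GGUF quant from this repo.
+ # Assumes `pip install huggingface_hub`.
+ from huggingface_hub import hf_hub_download
+
+ model_path = hf_hub_download(
+     repo_id="MaziyarPanahi/Qwen2.5-Coder-1.5B-CodeFIM-GGUF",
+     filename="Qwen2.5-Coder-1.5B-CodeFIM.Q5_K_M.gguf",
+ )
+ print(model_path)  # local cache path of the ~1.1 GB Q5_K_M file
+ ```
+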
+ ### About GGUF
+
+ GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
+
+ Here is an incomplete list of clients and libraries that are known to support GGUF:
+
+ * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+ * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
+ * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
+ * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
+ * [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
+ * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+ * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
+ * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
+ * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note that, as of this writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
+
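+ ### Fill-in-the-middle example
+
+ As a quick sanity check of a downloaded quant, here is a minimal fill-in-the-middle sketch using `llama-cpp-python` from the list above. The FIM special tokens (`<|fim_prefix|>`, `<|fim_suffix|>`, `<|fim_middle|>`) are the ones documented for the Qwen2.5-Coder family; the local path and generation settings are illustrative assumptions, not prescribed by this repository.
+
+ ```python
+ # Minimal FIM sketch with llama-cpp-python; the model path and settings
+ # are illustrative. The prompt format assumes Qwen2.5-Coder's FIM tokens.
+ from llama_cpp import Llama
+
+ llm = Llama(model_path="./Qwen2.5-Coder-1.5B-CodeFIM.Q5_K_M.gguf")
+
+ prefix = "def add(a, b):\n    "
+ suffix = "\n\nprint(add(2, 3))"
+ prompt = f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"
+
+ out = llm(prompt, max_tokens=32, stop=["<|endoftext|>"])
+ print(out["choices"][0]["text"])  # e.g. "return a + b"
+ ```
+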
+ ## Special thanks
+
+ 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.