morriszms committed
Commit 5ccac8e
1 Parent(s): e607b5e

Upload folder using huggingface_hub
Qwen2.5-1.5B-Q2_K.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1ac7416a3a1a00e9fd2edd23660c127327fbaa2ed7c5a3812efd767444019ae0
-size 676304480
+oid sha256:fdb8f17121e82aa37837eb736f97bc03f57026e9ba53b0dfcd44a0eb9420338b
+size 676302208
Qwen2.5-1.5B-Q3_K_L.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:10e32029941cde892ff85a37fbc0d83a53db1feb414bd440041dc7671258f7bb
-size 880162400
+oid sha256:191088f53df465fed211256cb09b79abdc6f719839a72f3665b50962be37f62b
+size 880160128
Qwen2.5-1.5B-Q3_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d063ca6943f89e198f9a64d311e2a78ee25fbf65941be6f4e097cb85b403905d
-size 824178272
+oid sha256:9401afd76ec98e97ed3e96a8e2dcc779410fb62507759a502c12238c02cbb918
+size 824176000
Qwen2.5-1.5B-Q3_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3f46af0f655cb17606542a1b5810204803ec6d748bce4e51ab81d818dee718f3
-size 760944224
+oid sha256:51608e760134359ab6e7c589cd5ce19dedb638fb49596cb8e7e8b1f9a627838f
+size 760941952
Qwen2.5-1.5B-Q4_0.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f94cd357f6f6cdf0d881690b1a5dffe5519bda03f35bf72e2d6b959fd0098419
-size 934954592
+oid sha256:e7ef4e8d4b92236ac613446a82f73ccc912e9c71984ff63911d4979efd7ba3c7
+size 934952320
Qwen2.5-1.5B-Q4_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:09fa1777cf2d2d83b373522ce6b731ef019e7b7631654512f4446f4747b4a38a
-size 986048096
+oid sha256:ce7e0f679e7c9ee3d0a452f16843a510f269ec5171807060cdc4f392e7a94cbe
+size 986045824
Qwen2.5-1.5B-Q4_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5f0498180a937cf778212dde6f41a97f78f2b53b894b0bf4b12c65e44a379278
-size 940312160
+oid sha256:0b1f513502b7d9f3e5e8558454a9500f61100821da6445fe62ea13459827dfea
+size 940309888
Qwen2.5-1.5B-Q5_0.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:78a870821acd09fb0ab11e41bd4beca2f6787d949d9a801e5d015f7b9d008d06
-size 1098729056
+oid sha256:3202221366f689c694dd39161f3ef07d7afb72d025a1b605eae145c59fe29f63
+size 1098726784
Qwen2.5-1.5B-Q5_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:699ebaf03bb2db6208dbbbe0aeb202a6133a0f4707bb1aa68381ccf92c7ec303
-size 1125049952
+oid sha256:f283952c9dda418096cc752b860dee3c54d09c95436003376856962083ae1258
+size 1125047680
Qwen2.5-1.5B-Q5_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6fbd702ff62309e046f0ed577d7af528536db627f2acf90fc3fff3aaa335b91d
-size 1098729056
+oid sha256:fd99f336bd5e43cbebc3859037537ca04ecec403e62308be69ff761bcaf59863
+size 1098726784
Qwen2.5-1.5B-Q6_K.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5cc92af57d484bc83f795a7c8a87b2b564dbdb444f0a69b50fe64ac45942b447
-size 1272739424
+oid sha256:2141393bde581b249db04c1edaeea8e04bc9a5586cc634eec84fdebaa4f24b5e
+size 1272737152
Qwen2.5-1.5B-Q8_0.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ab7bb0abd41c283df2bcfaf4fd8749b7aae6a71a05aa3f2518189f1a56d09a9a
-size 1646572640
+oid sha256:6a517c4e36996e1c87758d014d09e5c4c7536acdaa1c1801c98f348a1d6a1b25
+size 1646570368
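Across all twelve LFS pointer diffs above, every quantized file shrinks by exactly the same number of bytes. This is consistent with a metadata-only re-export (for example, the embedded source-model fields changing), though the commit itself does not state the cause; that inference is ours. A quick sketch, using the old/new sizes copied from the diffs, confirms the uniform delta:

```python
# Old and new byte sizes for each quant, copied from the LFS pointer diffs above.
sizes = {
    "Q2_K":   (676304480, 676302208),
    "Q3_K_L": (880162400, 880160128),
    "Q3_K_M": (824178272, 824176000),
    "Q3_K_S": (760944224, 760941952),
    "Q4_0":   (934954592, 934952320),
    "Q4_K_M": (986048096, 986045824),
    "Q4_K_S": (940312160, 940309888),
    "Q5_0":   (1098729056, 1098726784),
    "Q5_K_M": (1125049952, 1125047680),
    "Q5_K_S": (1098729056, 1098726784),
    "Q6_K":   (1272739424, 1272737152),
    "Q8_0":   (1646572640, 1646570368),
}

# Every file changed by the same amount: 2272 bytes.
deltas = {name: old - new for name, (old, new) in sizes.items()}
assert set(deltas.values()) == {2272}
```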
README.md CHANGED
@@ -1,14 +1,14 @@
 ---
-license: apache-2.0
-license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE
+base_model: unsloth/Qwen2.5-1.5B
 language:
 - en
-pipeline_tag: text-generation
 library_name: transformers
+license: apache-2.0
 tags:
+- unsloth
+- transformers
 - TensorBlock
 - GGUF
-base_model: Qwen/Qwen2.5-1.5B
 ---
 
 <div style="width: auto; margin-left: auto; margin-right: auto">
@@ -22,13 +22,12 @@
   </div>
 </div>
 
-## Qwen/Qwen2.5-1.5B - GGUF
+## unsloth/Qwen2.5-1.5B - GGUF
 
-This repo contains GGUF format model files for [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B).
+This repo contains GGUF format model files for [unsloth/Qwen2.5-1.5B](https://huggingface.co/unsloth/Qwen2.5-1.5B).
 
 The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
 
-
 <div style="text-align: left; margin: 20px 0;">
   <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
     Run them on the TensorBlock client using your local machine ↗
@@ -37,31 +36,26 @@
 
 ## Prompt template
 
-
 ```
-<|im_start|>system
-{system_prompt}<|im_end|>
-<|im_start|>user
-{prompt}<|im_end|>
-<|im_start|>assistant
+
 ```
 
 ## Model file specification
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
-| [Qwen2.5-1.5B-Q2_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-GGUF/blob/main/Qwen2.5-1.5B-Q2_K.gguf) | Q2_K | 0.630 GB | smallest, significant quality loss - not recommended for most purposes |
-| [Qwen2.5-1.5B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-GGUF/blob/main/Qwen2.5-1.5B-Q3_K_S.gguf) | Q3_K_S | 0.709 GB | very small, high quality loss |
-| [Qwen2.5-1.5B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-GGUF/blob/main/Qwen2.5-1.5B-Q3_K_M.gguf) | Q3_K_M | 0.768 GB | very small, high quality loss |
-| [Qwen2.5-1.5B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-GGUF/blob/main/Qwen2.5-1.5B-Q3_K_L.gguf) | Q3_K_L | 0.820 GB | small, substantial quality loss |
-| [Qwen2.5-1.5B-Q4_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-GGUF/blob/main/Qwen2.5-1.5B-Q4_0.gguf) | Q4_0 | 0.871 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
-| [Qwen2.5-1.5B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-GGUF/blob/main/Qwen2.5-1.5B-Q4_K_S.gguf) | Q4_K_S | 0.876 GB | small, greater quality loss |
-| [Qwen2.5-1.5B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-GGUF/blob/main/Qwen2.5-1.5B-Q4_K_M.gguf) | Q4_K_M | 0.918 GB | medium, balanced quality - recommended |
-| [Qwen2.5-1.5B-Q5_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-GGUF/blob/main/Qwen2.5-1.5B-Q5_0.gguf) | Q5_0 | 1.023 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
-| [Qwen2.5-1.5B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-GGUF/blob/main/Qwen2.5-1.5B-Q5_K_S.gguf) | Q5_K_S | 1.023 GB | large, low quality loss - recommended |
-| [Qwen2.5-1.5B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-GGUF/blob/main/Qwen2.5-1.5B-Q5_K_M.gguf) | Q5_K_M | 1.048 GB | large, very low quality loss - recommended |
-| [Qwen2.5-1.5B-Q6_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-GGUF/blob/main/Qwen2.5-1.5B-Q6_K.gguf) | Q6_K | 1.185 GB | very large, extremely low quality loss |
-| [Qwen2.5-1.5B-Q8_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-GGUF/blob/main/Qwen2.5-1.5B-Q8_0.gguf) | Q8_0 | 1.533 GB | very large, extremely low quality loss - not recommended |
+| [Qwen2.5-1.5B-Q2_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-GGUF/blob/main/Qwen2.5-1.5B-Q2_K.gguf) | Q2_K | 0.676 GB | smallest, significant quality loss - not recommended for most purposes |
+| [Qwen2.5-1.5B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-GGUF/blob/main/Qwen2.5-1.5B-Q3_K_S.gguf) | Q3_K_S | 0.761 GB | very small, high quality loss |
+| [Qwen2.5-1.5B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-GGUF/blob/main/Qwen2.5-1.5B-Q3_K_M.gguf) | Q3_K_M | 0.824 GB | very small, high quality loss |
+| [Qwen2.5-1.5B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-GGUF/blob/main/Qwen2.5-1.5B-Q3_K_L.gguf) | Q3_K_L | 0.880 GB | small, substantial quality loss |
+| [Qwen2.5-1.5B-Q4_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-GGUF/blob/main/Qwen2.5-1.5B-Q4_0.gguf) | Q4_0 | 0.935 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [Qwen2.5-1.5B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-GGUF/blob/main/Qwen2.5-1.5B-Q4_K_S.gguf) | Q4_K_S | 0.940 GB | small, greater quality loss |
+| [Qwen2.5-1.5B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-GGUF/blob/main/Qwen2.5-1.5B-Q4_K_M.gguf) | Q4_K_M | 0.986 GB | medium, balanced quality - recommended |
+| [Qwen2.5-1.5B-Q5_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-GGUF/blob/main/Qwen2.5-1.5B-Q5_0.gguf) | Q5_0 | 1.099 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [Qwen2.5-1.5B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-GGUF/blob/main/Qwen2.5-1.5B-Q5_K_S.gguf) | Q5_K_S | 1.099 GB | large, low quality loss - recommended |
+| [Qwen2.5-1.5B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-GGUF/blob/main/Qwen2.5-1.5B-Q5_K_M.gguf) | Q5_K_M | 1.125 GB | large, very low quality loss - recommended |
+| [Qwen2.5-1.5B-Q6_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-GGUF/blob/main/Qwen2.5-1.5B-Q6_K.gguf) | Q6_K | 1.273 GB | very large, extremely low quality loss |
+| [Qwen2.5-1.5B-Q8_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-GGUF/blob/main/Qwen2.5-1.5B-Q8_0.gguf) | Q8_0 | 1.647 GB | very large, extremely low quality loss - not recommended |
 
 
 ## Downloading instruction
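The README's "Downloading instruction" section is truncated in this diff, but however the files are fetched, the LFS pointers in this commit already give everything needed to check a download: an expected SHA-256 digest and byte size. A minimal verification sketch (the helper names are ours, not part of the repo; the pointer format follows the `version`/`oid`/`size` lines shown in the diffs above):

```python
import hashlib


def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file ('key value' lines) into a dict."""
    return dict(line.split(" ", 1) for line in text.strip().splitlines())


def verify_download(path, pointer_text, chunk_size=1 << 20):
    """Return True if the file at `path` matches the pointer's oid and size."""
    fields = parse_lfs_pointer(pointer_text)
    expected_oid = fields["oid"].split(":", 1)[1]  # strip the 'sha256:' prefix
    digest, size = hashlib.sha256(), 0
    with open(path, "rb") as f:
        # Hash in chunks so multi-GB GGUF files don't need to fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
            size += len(chunk)
    return digest.hexdigest() == expected_oid and size == int(fields["size"])
```

For example, after this commit a downloaded `Qwen2.5-1.5B-Q2_K.gguf` should hash to `fdb8f17121e82aa37837eb736f97bc03f57026e9ba53b0dfcd44a0eb9420338b` and be exactly 676302208 bytes.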