morriszms committed (verified)
Commit dea6c57 · 1 parent: 3361219

Upload folder using huggingface_hub

Qwen2.5-0.5B-Instruct-Q2_K.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6debf55d12a1b9f2a1f65766086241bd4bf7700ed15445a5a67f70056d867562
- size 338607360
+ oid sha256:8a817b744a4e38b35b45d79963a418090df3c6509aee8527bc3522aebc8b3bba
+ size 338607424
Qwen2.5-0.5B-Instruct-Q3_K_L.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5e2b09d67797fd9d8a5b9582fcd83a9a6260b264573aad00292c6dfdbcf37dd1
- size 369358080
+ oid sha256:fbc918c2194c3beeb3c7c140aaa50842639b0616752c7588aa7f2d565a368cc3
+ size 369358144
Qwen2.5-0.5B-Instruct-Q3_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9604e4b9047e7ad9edeb69a8bd5d3c8dc71d4b29f11f86f725a26bddbf55832e
- size 355466496
+ oid sha256:b8cd24438ac853e76935965776bd974730cedbee06b2982a0e584c38e9ef7762
+ size 355466560
Qwen2.5-0.5B-Instruct-Q3_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:28e16230d4801343ffbd5406c8baa8462ebdc9b38c7022fb17860bff8cd9efee
- size 338263296
+ oid sha256:ff78797a7255c7a53ba82de50167814c57b9a2571214bc4c62a2ee1944d458df
+ size 338263360
Qwen2.5-0.5B-Instruct-Q4_0.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d961a5928deb1d2ece8514c460e77e3bb1e0ff50079729416d4e364bc715b7a5
- size 352154880
+ oid sha256:335a326740145578a8a4485100db976438a6e39472507ef2c3af20f9cf908bf3
+ size 352154944
Qwen2.5-0.5B-Instruct-Q4_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:28deef462cebff16d91711ba808066b708eb01f8862492d20534b669ec9a4e9a
- size 397807872
+ oid sha256:c7bd431103afa86b420b2bd1bde3bf540d15e6182662d1d91ecae2bfae43d76a
+ size 397807936
Qwen2.5-0.5B-Instruct-Q4_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d67bd93f00b9c7163031e547a433b4420c3595a5ee696423d1d1cc84d884fa7a
- size 385471744
+ oid sha256:76134fea563d1bd5bb52b154d7563c7977042f12bfe8d44c83f1d9b742cc386e
+ size 385471808
Qwen2.5-0.5B-Instruct-Q5_0.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:000cd5711efa18680874df06e83120066affc286e7a73d36987c1294b8a5b72f
- size 396883200
+ oid sha256:e7fcf7bc2905e05d75cca9034ac663aa385d485a63eaca0c973162e804c4dda6
+ size 396883264
Qwen2.5-0.5B-Instruct-Q5_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:905513d15b0e0adb9547bcf908f0c213d42e6a98ce4c724317f1ab6941939768
- size 420086016
+ oid sha256:9c86b7d500da9f703945690194032a73ee42512c032f29803d34258335f637df
+ size 420086080
Qwen2.5-0.5B-Instruct-Q5_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:defc49d1b38603faa94ad41c3ac26fb13bba73edf0e7cceff8fa77390a9ce27a
- size 412710144
+ oid sha256:7b8ecb0d7696ea8a5efdc85582562b22c5375e2e1007d52c62a1ec683386ab62
+ size 412710208
Qwen2.5-0.5B-Instruct-Q6_K.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1a8b5590ffd802240c48e61723bdb514d1a92362446e947655d5293cde77c418
- size 505736448
+ oid sha256:0a19f7ee9afebe000a01aab67aecfaa77403a60ab52b4e9f557be4cefac3638a
+ size 505736512
Qwen2.5-0.5B-Instruct-Q8_0.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:39d58e2efaae220fc2a31a1acbb50b51574f65bccc7e497e66f7ade70dde22c3
- size 531068160
+ oid sha256:8bc34e09bb0b23b7dbc0ee3d52f99c227c28fbd604e10ce6e40d42a7a1f618a0
+ size 531068224
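
Each pointer update above changes only the Git LFS metadata (the `oid sha256` and `size`) recorded for a quant file, so the new values can be used to check a completed download. A minimal verification sketch, assuming the Q2_K file has already been fetched to the current directory (the local path and chunk size are illustrative, not part of this commit):

```python
import hashlib
from pathlib import Path

# Expected values copied from the updated (+) LFS pointer for the Q2_K quant above.
EXPECTED_SHA256 = "8a817b744a4e38b35b45d79963a418090df3c6509aee8527bc3522aebc8b3bba"
EXPECTED_SIZE = 338607424  # bytes

def verify_gguf(path: str, expected_sha256: str, expected_size: int) -> bool:
    """Return True if the local file matches the LFS pointer's oid and size."""
    p = Path(path)
    if p.stat().st_size != expected_size:
        return False
    h = hashlib.sha256()
    with p.open("rb") as f:
        # Hash in 1 MiB chunks to avoid loading the whole file into memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256

# Path is a placeholder; adjust to wherever the file was downloaded.
print(verify_gguf("Qwen2.5-0.5B-Instruct-Q2_K.gguf", EXPECTED_SHA256, EXPECTED_SIZE))
```

Swap in the `oid`/`size` pair from the corresponding `+` lines above to check any of the other quants.
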
README.md CHANGED
@@ -1,14 +1,15 @@
  ---
- base_model: unsloth/Qwen2.5-0.5B-Instruct
+ license: apache-2.0
+ license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
  language:
  - en
- library_name: transformers
- license: apache-2.0
+ pipeline_tag: text-generation
+ base_model: Qwen/Qwen2.5-0.5B-Instruct
  tags:
- - unsloth
- - transformers
+ - chat
  - TensorBlock
  - GGUF
+ library_name: transformers
  ---

  <div style="width: auto; margin-left: auto; margin-right: auto">
@@ -22,12 +23,11 @@ tags:
  </div>
  </div>

- ## unsloth/Qwen2.5-0.5B-Instruct - GGUF
+ ## Qwen/Qwen2.5-0.5B-Instruct - GGUF

- This repo contains GGUF format model files for [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
+ This repo contains GGUF format model files for [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct).

- The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
- 
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit ec7f3ac](https://github.com/ggerganov/llama.cpp/commit/ec7f3ac9ab33e46b136eb5ab6a76c4d81f57c7f1).

  <div style="text-align: left; margin: 20px 0;">
  <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
@@ -37,7 +37,6 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 
  ## Prompt template
 
- 
  ```
  <|im_start|>system
  {system_prompt}<|im_end|>
@@ -50,18 +49,18 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 
  | Filename | Quant type | File Size | Description |
  | -------- | ---------- | --------- | ----------- |
- | [Qwen2.5-0.5B-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-0.5B-Instruct-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Q2_K.gguf) | Q2_K | 0.315 GB | smallest, significant quality loss - not recommended for most purposes |
- | [Qwen2.5-0.5B-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-0.5B-Instruct-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Q3_K_S.gguf) | Q3_K_S | 0.315 GB | very small, high quality loss |
- | [Qwen2.5-0.5B-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-0.5B-Instruct-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Q3_K_M.gguf) | Q3_K_M | 0.331 GB | very small, high quality loss |
- | [Qwen2.5-0.5B-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/Qwen2.5-0.5B-Instruct-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Q3_K_L.gguf) | Q3_K_L | 0.344 GB | small, substantial quality loss |
- | [Qwen2.5-0.5B-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-0.5B-Instruct-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Q4_0.gguf) | Q4_0 | 0.328 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
- | [Qwen2.5-0.5B-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-0.5B-Instruct-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Q4_K_S.gguf) | Q4_K_S | 0.359 GB | small, greater quality loss |
- | [Qwen2.5-0.5B-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-0.5B-Instruct-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Q4_K_M.gguf) | Q4_K_M | 0.370 GB | medium, balanced quality - recommended |
- | [Qwen2.5-0.5B-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-0.5B-Instruct-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Q5_0.gguf) | Q5_0 | 0.370 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
- | [Qwen2.5-0.5B-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-0.5B-Instruct-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Q5_K_S.gguf) | Q5_K_S | 0.384 GB | large, low quality loss - recommended |
- | [Qwen2.5-0.5B-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-0.5B-Instruct-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Q5_K_M.gguf) | Q5_K_M | 0.391 GB | large, very low quality loss - recommended |
- | [Qwen2.5-0.5B-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-0.5B-Instruct-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Q6_K.gguf) | Q6_K | 0.471 GB | very large, extremely low quality loss |
- | [Qwen2.5-0.5B-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-0.5B-Instruct-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Q8_0.gguf) | Q8_0 | 0.495 GB | very large, extremely low quality loss - not recommended |
+ | [Qwen2.5-0.5B-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-0.5B-Instruct-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Q2_K.gguf) | Q2_K | 0.339 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [Qwen2.5-0.5B-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-0.5B-Instruct-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Q3_K_S.gguf) | Q3_K_S | 0.338 GB | very small, high quality loss |
+ | [Qwen2.5-0.5B-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-0.5B-Instruct-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Q3_K_M.gguf) | Q3_K_M | 0.355 GB | very small, high quality loss |
+ | [Qwen2.5-0.5B-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/Qwen2.5-0.5B-Instruct-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Q3_K_L.gguf) | Q3_K_L | 0.369 GB | small, substantial quality loss |
+ | [Qwen2.5-0.5B-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-0.5B-Instruct-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Q4_0.gguf) | Q4_0 | 0.352 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [Qwen2.5-0.5B-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-0.5B-Instruct-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Q4_K_S.gguf) | Q4_K_S | 0.385 GB | small, greater quality loss |
+ | [Qwen2.5-0.5B-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-0.5B-Instruct-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Q4_K_M.gguf) | Q4_K_M | 0.398 GB | medium, balanced quality - recommended |
+ | [Qwen2.5-0.5B-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-0.5B-Instruct-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Q5_0.gguf) | Q5_0 | 0.397 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [Qwen2.5-0.5B-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-0.5B-Instruct-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Q5_K_S.gguf) | Q5_K_S | 0.413 GB | large, low quality loss - recommended |
+ | [Qwen2.5-0.5B-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-0.5B-Instruct-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Q5_K_M.gguf) | Q5_K_M | 0.420 GB | large, very low quality loss - recommended |
+ | [Qwen2.5-0.5B-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-0.5B-Instruct-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Q6_K.gguf) | Q6_K | 0.506 GB | very large, extremely low quality loss |
+ | [Qwen2.5-0.5B-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-0.5B-Instruct-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Q8_0.gguf) | Q8_0 | 0.531 GB | very large, extremely low quality loss - not recommended |
 
 
  ## Downloading instruction
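
The diff ends at this heading, so the README's own downloading instructions are not shown in this commit. As an illustration only (an assumption, not the repo's instructions), one common way to pull a single quant from this repository with `huggingface_hub`:

```python
# Illustrative sketch; the README's actual downloading instructions are not part of this diff.
from huggingface_hub import hf_hub_download

# repo_id and filename come from the file table above; local_dir is an assumed destination.
path = hf_hub_download(
    repo_id="tensorblock/Qwen2.5-0.5B-Instruct-GGUF",
    filename="Qwen2.5-0.5B-Instruct-Q4_K_M.gguf",
    local_dir=".",
)
print(f"Downloaded to: {path}")
```
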