morriszms committed
Commit fc15524
1 Parent(s): a2e79c5

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+internlm2-chat-7b-sft-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+internlm2-chat-7b-sft-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+internlm2-chat-7b-sft-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+internlm2-chat-7b-sft-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+internlm2-chat-7b-sft-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+internlm2-chat-7b-sft-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+internlm2-chat-7b-sft-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+internlm2-chat-7b-sft-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+internlm2-chat-7b-sft-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+internlm2-chat-7b-sft-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+internlm2-chat-7b-sft-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+internlm2-chat-7b-sft-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,75 @@
---
pipeline_tag: text-generation
license: other
tags:
- TensorBlock
- GGUF
base_model: internlm/internlm2-chat-7b-sft
---

<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
  <div style="display: flex; flex-direction: column; align-items: flex-start;">
    <p style="margin-top: 0.5em; margin-bottom: 0em;">
      Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
    </p>
  </div>
</div>

## internlm/internlm2-chat-7b-sft - GGUF

This repo contains GGUF format model files for [internlm/internlm2-chat-7b-sft](https://huggingface.co/internlm/internlm2-chat-7b-sft).

The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
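
To build a compatible llama.cpp binary yourself, one reasonable path is the project's standard CMake workflow (a sketch added here for convenience, not taken from the original card):

```shell
# Clone llama.cpp and check out the referenced commit (or any later one)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout a6744e43e80f4be6398fc7733a01642c846dce1d
# Standard CMake build; tools such as llama-cli land in build/bin/
cmake -B build
cmake --build build --config Release
```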

## Prompt template

```
<s><|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
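
As a usage sketch (the flags and their conversation-mode behaviour are assumptions about llama.cpp's `llama-cli`, not something stated in this card), running with `-cnv` should apply the ChatML-style template above from the GGUF metadata:

```shell
# Hypothetical invocation: interactive chat with the Q4_K_M quant;
# -cnv enables conversation mode and -p supplies the system prompt
./build/bin/llama-cli -m internlm2-chat-7b-sft-Q4_K_M.gguf -cnv \
  -p "You are a helpful assistant."
```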

## Model file specification

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [internlm2-chat-7b-sft-Q2_K.gguf](https://huggingface.co/tensorblock/internlm2-chat-7b-sft-GGUF/tree/main/internlm2-chat-7b-sft-Q2_K.gguf) | Q2_K | 2.799 GB | smallest, significant quality loss - not recommended for most purposes |
| [internlm2-chat-7b-sft-Q3_K_S.gguf](https://huggingface.co/tensorblock/internlm2-chat-7b-sft-GGUF/tree/main/internlm2-chat-7b-sft-Q3_K_S.gguf) | Q3_K_S | 3.237 GB | very small, high quality loss |
| [internlm2-chat-7b-sft-Q3_K_M.gguf](https://huggingface.co/tensorblock/internlm2-chat-7b-sft-GGUF/tree/main/internlm2-chat-7b-sft-Q3_K_M.gguf) | Q3_K_M | 3.567 GB | very small, high quality loss |
| [internlm2-chat-7b-sft-Q3_K_L.gguf](https://huggingface.co/tensorblock/internlm2-chat-7b-sft-GGUF/tree/main/internlm2-chat-7b-sft-Q3_K_L.gguf) | Q3_K_L | 3.850 GB | small, substantial quality loss |
| [internlm2-chat-7b-sft-Q4_0.gguf](https://huggingface.co/tensorblock/internlm2-chat-7b-sft-GGUF/tree/main/internlm2-chat-7b-sft-Q4_0.gguf) | Q4_0 | 4.147 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [internlm2-chat-7b-sft-Q4_K_S.gguf](https://huggingface.co/tensorblock/internlm2-chat-7b-sft-GGUF/tree/main/internlm2-chat-7b-sft-Q4_K_S.gguf) | Q4_K_S | 4.177 GB | small, greater quality loss |
| [internlm2-chat-7b-sft-Q4_K_M.gguf](https://huggingface.co/tensorblock/internlm2-chat-7b-sft-GGUF/tree/main/internlm2-chat-7b-sft-Q4_K_M.gguf) | Q4_K_M | 4.389 GB | medium, balanced quality - recommended |
| [internlm2-chat-7b-sft-Q5_0.gguf](https://huggingface.co/tensorblock/internlm2-chat-7b-sft-GGUF/tree/main/internlm2-chat-7b-sft-Q5_0.gguf) | Q5_0 | 5.004 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [internlm2-chat-7b-sft-Q5_K_S.gguf](https://huggingface.co/tensorblock/internlm2-chat-7b-sft-GGUF/tree/main/internlm2-chat-7b-sft-Q5_K_S.gguf) | Q5_K_S | 5.004 GB | large, low quality loss - recommended |
| [internlm2-chat-7b-sft-Q5_K_M.gguf](https://huggingface.co/tensorblock/internlm2-chat-7b-sft-GGUF/tree/main/internlm2-chat-7b-sft-Q5_K_M.gguf) | Q5_K_M | 5.129 GB | large, very low quality loss - recommended |
| [internlm2-chat-7b-sft-Q6_K.gguf](https://huggingface.co/tensorblock/internlm2-chat-7b-sft-GGUF/tree/main/internlm2-chat-7b-sft-Q6_K.gguf) | Q6_K | 5.914 GB | very large, extremely low quality loss |
| [internlm2-chat-7b-sft-Q8_0.gguf](https://huggingface.co/tensorblock/internlm2-chat-7b-sft-GGUF/tree/main/internlm2-chat-7b-sft-Q8_0.gguf) | Q8_0 | 7.659 GB | very large, extremely low quality loss - not recommended |

## Downloading instructions

### Command line

First, install the Hugging Face CLI:

```shell
pip install -U "huggingface_hub[cli]"
```

Then, download an individual model file to a local directory:

```shell
huggingface-cli download tensorblock/internlm2-chat-7b-sft-GGUF --include "internlm2-chat-7b-sft-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```

If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:

```shell
huggingface-cli download tensorblock/internlm2-chat-7b-sft-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
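
Optionally, you can verify a downloaded file against the sha256 recorded in its Git LFS pointer (the digests are listed with the files in this commit). A quick check with `sha256sum`:

```shell
# Example for the Q2_K file; the expected digest comes from its LFS pointer
sha256sum MY_LOCAL_DIR/internlm2-chat-7b-sft-Q2_K.gguf
# expected: 2727579092bf33939004f70de18164a307b32ae7ec3b1472c3a7037ef1196174
```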
internlm2-chat-7b-sft-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2727579092bf33939004f70de18164a307b32ae7ec3b1472c3a7037ef1196174
size 3005449312
internlm2-chat-7b-sft-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e0d2d9284252274266616753566d038c9f14d0be6dd4ea76aa39aa58ae2a9721
size 4133418080
internlm2-chat-7b-sft-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7630c1621b6bd563c51a4e66b2d450e4411dbf64ae5bd0900b924370727bb039
size 3830379616
internlm2-chat-7b-sft-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2f55f524a2dfc3a10f8fb4681db3d67ec9f96c9cb007a46ecfcb7ea730c04468
size 3475960928
internlm2-chat-7b-sft-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0fb2a6bebf8d74ad97a31bfd0cf4c6ec2e1a2cd19a3f1f4f0326409087eccdd1
size 4453246048
internlm2-chat-7b-sft-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2ce6fee21c7cc002dd68e0370d1a18a9b7bdc4c193aaa32003625ab1f2f65eec
size 4712768608
internlm2-chat-7b-sft-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:df52bb79bc2be87ea9a13fbad138941b39afab2d8e5a9eea7fa58d5fc4578e5a
size 4484703328
internlm2-chat-7b-sft-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:be1123cbba85ea5f56c6c8d465c8bd1e0caa4f08d5188c58d127177bf1963fd2
size 5373043808
internlm2-chat-7b-sft-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:42c93144c1dab9bf37f23c75a671dad2fe4b191f30f7c66f4d888f624f0b353b
size 5506737248
internlm2-chat-7b-sft-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:51924552bc27836c4630886135c02c6bd6706e79e7d361166628a198e8d12d4b
size 5373043808
internlm2-chat-7b-sft-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a739dee2cb6f8d9f59373be815c44ebdd3105ef8d91dd456198e9a8e8fbbf7cb
size 6350328928
internlm2-chat-7b-sft-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:84170d7c1f12d4367de3b05348096ad846c566e8a74e794172d586b61ed4e213
size 8224240736