morriszms committed
Commit 901d1b8
1 Parent(s): e1cd3f1

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ TinyMistral-6x248M-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyMistral-6x248M-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyMistral-6x248M-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyMistral-6x248M-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyMistral-6x248M-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyMistral-6x248M-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyMistral-6x248M-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyMistral-6x248M-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyMistral-6x248M-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyMistral-6x248M-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyMistral-6x248M-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyMistral-6x248M-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,108 @@
+ ---
+ license: apache-2.0
+ tags:
+ - moe
+ - frankenmoe
+ - merge
+ - mergekit
+ - lazymergekit
+ - Locutusque/TinyMistral-248M-v2
+ - Locutusque/TinyMistral-248M-v2.5
+ - Locutusque/TinyMistral-248M-v2.5-Instruct
+ - jtatman/tinymistral-v2-pycoder-instruct-248m
+ - Felladrin/TinyMistral-248M-SFT-v4
+ - Locutusque/TinyMistral-248M-v2-Instruct
+ - TensorBlock
+ - GGUF
+ base_model: M4-ai/TinyMistral-6x248M
+ inference:
+   parameters:
+     do_sample: true
+     temperature: 0.2
+     top_p: 0.14
+     top_k: 12
+     max_new_tokens: 250
+     repetition_penalty: 1.15
+ widget:
+ - text: '<|im_start|>user
+
+     Write me a Python program that calculates the factorial of n. <|im_end|>
+
+     <|im_start|>assistant
+
+     '
+ - text: An emerging clinical approach to treat substance abuse disorders involves
+     a form of cognitive-behavioral therapy whereby addicts learn to reduce their reactivity
+     to drug-paired stimuli through cue-exposure or extinction training. It is, however,
+ datasets:
+ - nampdn-ai/mini-peS2o
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+   <div style="display: flex; flex-direction: column; align-items: flex-start;">
+     <p style="margin-top: 0.5em; margin-bottom: 0em;">
+         Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+     </p>
+   </div>
+ </div>
+
+ ## M4-ai/TinyMistral-6x248M - GGUF
+
+ This repo contains GGUF format model files for [M4-ai/TinyMistral-6x248M](https://huggingface.co/M4-ai/TinyMistral-6x248M).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
+
+ <div style="text-align: left; margin: 20px 0;">
+     <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+         Run them on the TensorBlock client using your local machine ↗
+     </a>
+ </div>
+
+ ## Prompt template
+
+ ```
+
+ ```
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [TinyMistral-6x248M-Q2_K.gguf](https://huggingface.co/tensorblock/TinyMistral-6x248M-GGUF/blob/main/TinyMistral-6x248M-Q2_K.gguf) | Q2_K | 0.379 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [TinyMistral-6x248M-Q3_K_S.gguf](https://huggingface.co/tensorblock/TinyMistral-6x248M-GGUF/blob/main/TinyMistral-6x248M-Q3_K_S.gguf) | Q3_K_S | 0.445 GB | very small, high quality loss |
+ | [TinyMistral-6x248M-Q3_K_M.gguf](https://huggingface.co/tensorblock/TinyMistral-6x248M-GGUF/blob/main/TinyMistral-6x248M-Q3_K_M.gguf) | Q3_K_M | 0.487 GB | very small, high quality loss |
+ | [TinyMistral-6x248M-Q3_K_L.gguf](https://huggingface.co/tensorblock/TinyMistral-6x248M-GGUF/blob/main/TinyMistral-6x248M-Q3_K_L.gguf) | Q3_K_L | 0.527 GB | small, substantial quality loss |
+ | [TinyMistral-6x248M-Q4_0.gguf](https://huggingface.co/tensorblock/TinyMistral-6x248M-GGUF/blob/main/TinyMistral-6x248M-Q4_0.gguf) | Q4_0 | 0.574 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [TinyMistral-6x248M-Q4_K_S.gguf](https://huggingface.co/tensorblock/TinyMistral-6x248M-GGUF/blob/main/TinyMistral-6x248M-Q4_K_S.gguf) | Q4_K_S | 0.577 GB | small, greater quality loss |
+ | [TinyMistral-6x248M-Q4_K_M.gguf](https://huggingface.co/tensorblock/TinyMistral-6x248M-GGUF/blob/main/TinyMistral-6x248M-Q4_K_M.gguf) | Q4_K_M | 0.613 GB | medium, balanced quality - recommended |
+ | [TinyMistral-6x248M-Q5_0.gguf](https://huggingface.co/tensorblock/TinyMistral-6x248M-GGUF/blob/main/TinyMistral-6x248M-Q5_0.gguf) | Q5_0 | 0.695 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [TinyMistral-6x248M-Q5_K_S.gguf](https://huggingface.co/tensorblock/TinyMistral-6x248M-GGUF/blob/main/TinyMistral-6x248M-Q5_K_S.gguf) | Q5_K_S | 0.695 GB | large, low quality loss - recommended |
+ | [TinyMistral-6x248M-Q5_K_M.gguf](https://huggingface.co/tensorblock/TinyMistral-6x248M-GGUF/blob/main/TinyMistral-6x248M-Q5_K_M.gguf) | Q5_K_M | 0.715 GB | large, very low quality loss - recommended |
+ | [TinyMistral-6x248M-Q6_K.gguf](https://huggingface.co/tensorblock/TinyMistral-6x248M-GGUF/blob/main/TinyMistral-6x248M-Q6_K.gguf) | Q6_K | 0.824 GB | very large, extremely low quality loss |
+ | [TinyMistral-6x248M-Q8_0.gguf](https://huggingface.co/tensorblock/TinyMistral-6x248M-GGUF/blob/main/TinyMistral-6x248M-Q8_0.gguf) | Q8_0 | 1.067 GB | very large, extremely low quality loss - not recommended |
+
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/TinyMistral-6x248M-GGUF --include "TinyMistral-6x248M-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/TinyMistral-6x248M-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
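The `--include` flag accepts shell-style glob patterns. As a minimal local sanity check, a pattern can be tested against the filenames from the specification table with Python's stdlib `fnmatch` before issuing the download (the file list below is copied from that table; no network access is needed):

```python
from fnmatch import fnmatch

# Filenames published in this repo, as listed in the specification table.
files = [
    "TinyMistral-6x248M-Q2_K.gguf", "TinyMistral-6x248M-Q3_K_S.gguf",
    "TinyMistral-6x248M-Q3_K_M.gguf", "TinyMistral-6x248M-Q3_K_L.gguf",
    "TinyMistral-6x248M-Q4_0.gguf", "TinyMistral-6x248M-Q4_K_S.gguf",
    "TinyMistral-6x248M-Q4_K_M.gguf", "TinyMistral-6x248M-Q5_0.gguf",
    "TinyMistral-6x248M-Q5_K_S.gguf", "TinyMistral-6x248M-Q5_K_M.gguf",
    "TinyMistral-6x248M-Q6_K.gguf", "TinyMistral-6x248M-Q8_0.gguf",
]

# Same glob semantics as --include: '*' matches any run of characters.
pattern = "*Q4_K*gguf"
selected = [f for f in files if fnmatch(f, pattern)]
print(selected)  # → the two Q4_K variants (S and M)
```

Note that `*Q4_K*gguf` matches both the Q4_K_S and Q4_K_M files but not Q4_0; tighten the pattern (e.g. `*Q4_K_M*`) to fetch a single variant.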
TinyMistral-6x248M-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6dae0b8b0e7df28219a9ee712f87b1b8b2a69f80f8f92812a48a00e083e2555f
+ size 379042816
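Each `.gguf` entry in this commit is a Git LFS pointer: the `oid sha256:` line records the hash of the actual model file, and `size` its byte count. After downloading, integrity can be verified by streaming the file through SHA-256 and comparing against the pointer's oid. A minimal sketch (the path is a placeholder for wherever the file was downloaded):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file through SHA-256 in 1 MiB chunks (GGUF files are large,
    so reading the whole file into memory at once is avoided)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the oid from the LFS pointer, e.g. for the Q2_K file:
# sha256_of("MY_LOCAL_DIR/TinyMistral-6x248M-Q2_K.gguf") should equal
# "6dae0b8b0e7df28219a9ee712f87b1b8b2a69f80f8f92812a48a00e083e2555f"
```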
TinyMistral-6x248M-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8481b2811284339a32c66688ea3dcd4ba6a9dfd12cbbb8a1f81449bd134ea4df
+ size 526804480
TinyMistral-6x248M-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3c07813d27399d3f6b9c029950395eb3fc2f803289f3016bf0bf6ee4290615c1
+ size 487155200
TinyMistral-6x248M-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a63d3069b5e5c3642e78a82532a09cabd150f6d39669e368309e071ee9b6b893
+ size 444892672
TinyMistral-6x248M-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:23f572d688759bcb2051f544287196e9f7bc96941b7bdb1c4635a50037dab14b
+ size 573747360
TinyMistral-6x248M-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1675eb7c8dbe427264bacaf092fdc6fb83949b5ef088b4986554465e46f97d72
+ size 613081248
TinyMistral-6x248M-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:88c892829e1de64c6b84d9af61c2f591ae03bccf7857b9209795c63af9e130ab
+ size 577024160
TinyMistral-6x248M-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:277cb2a293e6d9ddcfe21a36d329c6c8188f693a025013c89b7b03ae560a434a
+ size 695022368
TinyMistral-6x248M-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ed3992c98fb215bfb9eee199b2398d5188d6c342095a59e89905ee4984522720
+ size 715285280
TinyMistral-6x248M-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:63aa5e38d211fe2038d27daa631670d6341cc0ebcb68abc3ece31efc7f9c5ff3
+ size 695022368
TinyMistral-6x248M-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:abf62b7f7084e3144a57fa1d994881e942bcb3d33b807e074383a538c9de4c3b
+ size 823877088
TinyMistral-6x248M-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1edf14cadd41f814b3962387bb443121882f1d6a289cd55f9e7d2ccebd7934b4
+ size 1066784608