Upload folder using huggingface_hub
- .gitattributes +7 -0
- Mistral-7B-Instruct-v0.3-BF16.gguf +3 -0
- Mistral-7B-Instruct-v0.3-F32.gguf +3 -0
- Mistral-7B-Instruct-v0.3-Q2_K.gguf +3 -0
- Mistral-7B-Instruct-v0.3-Q4_K_M.gguf +3 -0
- Mistral-7B-Instruct-v0.3-Q5_K_M.gguf +3 -0
- Mistral-7B-Instruct-v0.3-Q6_K.gguf +3 -0
- Mistral-7B-Instruct-v0.3-Q8_0.gguf +3 -0
- README.md +77 -0
- imgs/7bv3-kde.png +0 -0
- imgs/7bv3-mean.png +0 -0
- imgs/7bv3-range.png +0 -0
- imgs/7bv3-stddev.png +0 -0
- sha256/Mistral-7B-Instruct-v0.3-BF16.sha256 +1 -0
- sha256/Mistral-7B-Instruct-v0.3-F32.sha256 +1 -0
- sha256/Mistral-7B-Instruct-v0.3-Q2_K.sha256 +1 -0
- sha256/Mistral-7B-Instruct-v0.3-Q4_K_M.sha256 +1 -0
- sha256/Mistral-7B-Instruct-v0.3-Q5_K_M.sha256 +1 -0
- sha256/Mistral-7B-Instruct-v0.3-Q6_K.sha256 +1 -0
- sha256/Mistral-7B-Instruct-v0.3-Q8_0.sha256 +1 -0
.gitattributes CHANGED
@@ -33,3 +33,10 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+Mistral-7B-Instruct-v0.3-BF16.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-7B-Instruct-v0.3-F32.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-7B-Instruct-v0.3-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-7B-Instruct-v0.3-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-7B-Instruct-v0.3-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-7B-Instruct-v0.3-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-7B-Instruct-v0.3-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-7B-Instruct-v0.3-BF16.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:56dfeb0a35c1be35fb2eab06db95a1240c3bfd3aef9efce694dbcfc9ceb05e9e
+size 14497341504
Mistral-7B-Instruct-v0.3-F32.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:581023dde2206f80b77c2bf4633fa1922aeaaa13c0fbc1181c0dd33e4b643f9c
+size 28992856128
Mistral-7B-Instruct-v0.3-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aadb810dc76ad8d2972fecd369dd1051f959c7b0bce1eebc64acfe2ca014d6b7
+size 2722881600
Mistral-7B-Instruct-v0.3-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:361e8b5b765dc0be5f78dfaa85914014ce216cbde3359ba0fb6fdf6f4571fad2
+size 4372815936
Mistral-7B-Instruct-v0.3-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aeae46d2c43c38eb399b0c0588e1dabaadf02dd603a983069dac17101124c9d3
+size 5136179264
Mistral-7B-Instruct-v0.3-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:58938438a6aeae9553252b0b61524506a06424c71088a78861729a3187f76948
+size 5947252800
Mistral-7B-Instruct-v0.3-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1a0e5d88f57fd1b03bd8ee60111caaaebb1bd8565d87ee4899cc389f134baa06
+size 7702569024
README.md ADDED
@@ -0,0 +1,77 @@
+---
+base_model:
+- mistralai/Mistral-7B-Instruct-v0.3
+language:
+- en
+model_creator: Mistral AI
+model_name: Mistral-7B-Instruct-v0.3
+model_type: llama
+quantized_by: s3dev-ai
+tags:
+- text-generation
+---
+
+# Overview
+
+This model repository provides various quantisations of the following [base model](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3), in GGUF format:
+- mistralai/Mistral-7B-Instruct-v0.3
+
+# Model Description
+
+For a full model description, please refer to the [base model's](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) card.
+
+This model, and subsequent quantisations, have been converted directly from the author's base model *unaltered*.
+
+## How are the GGUF files created?
+After cloning the author's original base model repository, [`llama.cpp`](https://github.com/ggml-org/llama.cpp) is used to convert the model to GGUF format, using `--outtype=f32` to preserve the original model's 32-bit fidelity.
+
+Finally, for each subsequent quantisation level, `llama.cpp`'s `llama-quantize` executable is called using the F32 GGUF file as the source file.
+
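The conversion-and-quantisation workflow described above can be sketched as follows. This is an illustrative sketch, not the repository's exact build script: paths are assumptions, and the converter script and binary locations can differ between `llama.cpp` versions.

```shell
# Illustrative sketch of the workflow described above; paths and the list
# of quantisation levels are assumptions, not the repository's own script.

# 1. Convert the cloned HF repository to a single 32-bit GGUF file.
python llama.cpp/convert_hf_to_gguf.py \
    ./Mistral-7B-Instruct-v0.3 \
    --outtype f32 \
    --outfile Mistral-7B-Instruct-v0.3-F32.gguf

# 2. Derive each quantisation level from the F32 source file.
for qtype in Q2_K Q4_K_M Q5_K_M Q6_K Q8_0; do
    ./llama.cpp/build/bin/llama-quantize \
        Mistral-7B-Instruct-v0.3-F32.gguf \
        "Mistral-7B-Instruct-v0.3-${qtype}.gguf" \
        "${qtype}"
done
```

Quantising from the F32 GGUF (rather than re-quantising a lower-precision file) means every level is derived from the same full-fidelity source.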
+# Quantisation
+The purpose of this repository is to provide *unaltered* quantisations of the author's base model. This section is designed to help the user visualise the differences between quantisation levels, to assist in model (quantisation) selection.
+
+## Comparison Statistics
+To aid a user in model/quantisation selection, the team has created the following statistics specifically for comparing the similarity scores across quantisation runs.
+
+The dataset against which each run was conducted is composed of 175 question/answer pairs, divided amongst 7 topics, specifically designed to test a quantisation's processing ability. The test dataset was created by Mistral Large (via [Le Chat](https://chat.mistral.ai/chat)) using prompts explicitly stating the requirement for the question/answer pairs to be designed for Mistral model quantisation testing.
+
+The similarity scores used by these statistics were calculated as the cosine similarity between the embedding of the 'gold standard' answer provided in the dataset, and the embedding of the response from the quantised model. The embedding model used in these tests is [all-MiniLM-L6-v2 Q8_0](https://huggingface.co/s3dev-ai/all-MiniLM-L6-v2-gguf). We are also planning to repeat this test using the [embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m) model to determine whether the results can be improved.
+
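The scoring step above amounts to a cosine similarity between two embedding vectors. A minimal sketch using NumPy, with toy vectors standing in for the real sentence embeddings:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors: dot(a, b) / (|a| * |b|)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for the 'gold standard' and model-response embeddings;
# in the real tests these come from the all-MiniLM-L6-v2 embedding model.
gold = np.array([0.10, 0.80, 0.30])
response = np.array([0.12, 0.75, 0.35])

score = cosine_similarity(gold, response)
print(f"similarity: {score:.4f}")
```

A score of 1.0 indicates identical direction (a near-verbatim answer); scores approaching 0 indicate unrelated responses.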
+### Range
+The range graph below illustrates how the range of similarity scores varies amongst the quantisation levels. Included in the range stats are the:
+
+- Minimum scores
+- Maximum scores
+- Mean scores
+- Score distribution (KDE)
+- Outliers
+
+<!-- Range image -->
+<div align="center">
+<img src="imgs/7bv3-range.png" alt="Stats Graph: Range" width="90%">
+</div>
+
+### Mean
+The mean graph below illustrates how the mean similarity scores (when grouped by 'topic') vary amongst the quantisation levels.
+
+<!-- Mean image -->
+<div align="center">
+<img src="imgs/7bv3-mean.png" alt="Stats Graph: Mean" width="90%">
+</div>
+
+### Standard Deviation
+The standard deviation graph below illustrates how the spread of similarity scores varies amongst the quantisation levels, when grouped by the test dataset's 'topic' categories.
+
+<!-- StdDev image -->
+<div align="center">
+<img src="imgs/7bv3-stddev.png" alt="Stats Graph: StdDev" width="90%">
+</div>
+
+### Kernel Density Estimate
+The KDE graph below illustrates how the distribution of similarity scores varies amongst the quantisation levels.
+
+<!-- KDE image -->
+<div align="center">
+<img src="imgs/7bv3-kde.png" alt="Stats Graph: KDE" width="90%">
+</div>
imgs/7bv3-kde.png ADDED
imgs/7bv3-mean.png ADDED
imgs/7bv3-range.png ADDED
imgs/7bv3-stddev.png ADDED
sha256/Mistral-7B-Instruct-v0.3-BF16.sha256 ADDED
@@ -0,0 +1 @@
+56dfeb0a35c1be35fb2eab06db95a1240c3bfd3aef9efce694dbcfc9ceb05e9e  Mistral-7B-Instruct-v0.3-BF16.gguf
sha256/Mistral-7B-Instruct-v0.3-F32.sha256 ADDED
@@ -0,0 +1 @@
+581023dde2206f80b77c2bf4633fa1922aeaaa13c0fbc1181c0dd33e4b643f9c  Mistral-7B-Instruct-v0.3-F32.gguf
sha256/Mistral-7B-Instruct-v0.3-Q2_K.sha256 ADDED
@@ -0,0 +1 @@
+aadb810dc76ad8d2972fecd369dd1051f959c7b0bce1eebc64acfe2ca014d6b7  Mistral-7B-Instruct-v0.3-Q2_K.gguf
sha256/Mistral-7B-Instruct-v0.3-Q4_K_M.sha256 ADDED
@@ -0,0 +1 @@
+361e8b5b765dc0be5f78dfaa85914014ce216cbde3359ba0fb6fdf6f4571fad2  Mistral-7B-Instruct-v0.3-Q4_K_M.gguf
sha256/Mistral-7B-Instruct-v0.3-Q5_K_M.sha256 ADDED
@@ -0,0 +1 @@
+aeae46d2c43c38eb399b0c0588e1dabaadf02dd603a983069dac17101124c9d3  Mistral-7B-Instruct-v0.3-Q5_K_M.gguf
sha256/Mistral-7B-Instruct-v0.3-Q6_K.sha256 ADDED
@@ -0,0 +1 @@
+58938438a6aeae9553252b0b61524506a06424c71088a78861729a3187f76948  Mistral-7B-Instruct-v0.3-Q6_K.gguf
sha256/Mistral-7B-Instruct-v0.3-Q8_0.sha256 ADDED
@@ -0,0 +1 @@
+1a0e5d88f57fd1b03bd8ee60111caaaebb1bd8565d87ee4899cc389f134baa06  Mistral-7B-Instruct-v0.3-Q8_0.gguf
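The `.sha256` files above follow the standard `sha256sum` checksum-file format, so a download can be verified with `sha256sum -c` run from the directory containing the downloaded `.gguf` file (e.g. `sha256sum -c sha256/Mistral-7B-Instruct-v0.3-Q4_K_M.sha256`). A self-contained demonstration of the workflow, using a throwaway file in place of a multi-gigabyte model:

```shell
# Demo of the checksum workflow; 'demo.gguf' is a tiny stand-in for a
# real model file so the commands can run anywhere.
workdir="$(mktemp -d)"
cd "${workdir}"

printf 'example payload' > demo.gguf    # stand-in for a downloaded model
sha256sum demo.gguf > demo.sha256       # record its checksum
sha256sum -c demo.sha256                # verify the file against the record
```

`sha256sum -c` exits non-zero if the file's contents do not match the recorded digest, making it suitable for scripted download verification.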