|
--- |
|
base_model: sentence-transformers/all-MiniLM-L6-v2 |
|
license: apache-2.0 |
|
library_name: sentence-transformers |
|
model_creator: Sentence Transformers |
|
quantized_by: Second State Inc. |
|
language: en |
|
tags: |
|
- sentence-transformers |
|
- feature-extraction |
|
- sentence-similarity |
|
- transformers |
|
--- |
|
|
|
<!-- header start --> |
|
|
<div style="width: auto; margin-left: auto; margin-right: auto"> |
|
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;"> |
|
</div> |
|
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> |
|
<!-- header end --> |
|
|
|
# All-MiniLM-L6-v2-GGUF |
|
|
|
## Original Model |
|
|
|
[sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) |
|
|
|
## Run with LlamaEdge |
|
|
|
- LlamaEdge version: [v0.8.2](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.8.2) and above |
|
|
|
- Context size: `384` |
|
|
|
- Run as LlamaEdge service |
|
|
|
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:all-MiniLM-L6-v2-ggml-model-f16.gguf \
  llama-api-server.wasm \
  --prompt-template llama-2-chat \
  --ctx-size 384 \
  --model-name all-MiniLM-L6-v2
```
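
Once the server is up, embeddings can be requested through its OpenAI-compatible `/v1/embeddings` endpoint. A minimal smoke test with `curl`, assuming the server is listening on the default `localhost:8080`:

```bash
# Request an embedding from the running LlamaEdge API server
# (assumes the default listen address of localhost:8080)
curl -s http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{
        "model": "all-MiniLM-L6-v2",
        "input": ["Hello, world!"]
      }'
```

Each input string should come back as one 384-dimensional vector in the `data` array of the JSON response.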
|
|
|
## Quantized GGUF Models |
|
|
|
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [all-MiniLM-L6-v2-Q2_K.gguf](https://huggingface.co/second-state/All-MiniLM-L6-v2-Embedding-GGUF/blob/main/all-MiniLM-L6-v2-Q2_K.gguf) | Q2_K | 2 | 19.2 MB | smallest, significant quality loss - not recommended for most purposes |
| [all-MiniLM-L6-v2-Q3_K_L.gguf](https://huggingface.co/second-state/All-MiniLM-L6-v2-Embedding-GGUF/blob/main/all-MiniLM-L6-v2-Q3_K_L.gguf) | Q3_K_L | 3 | 20.5 MB | small, substantial quality loss |
| [all-MiniLM-L6-v2-Q3_K_M.gguf](https://huggingface.co/second-state/All-MiniLM-L6-v2-Embedding-GGUF/blob/main/all-MiniLM-L6-v2-Q3_K_M.gguf) | Q3_K_M | 3 | 19.9 MB | very small, high quality loss |
| [all-MiniLM-L6-v2-Q3_K_S.gguf](https://huggingface.co/second-state/All-MiniLM-L6-v2-Embedding-GGUF/blob/main/all-MiniLM-L6-v2-Q3_K_S.gguf) | Q3_K_S | 3 | 19.2 MB | very small, high quality loss |
| [all-MiniLM-L6-v2-Q4_0.gguf](https://huggingface.co/second-state/All-MiniLM-L6-v2-Embedding-GGUF/blob/main/all-MiniLM-L6-v2-Q4_0.gguf) | Q4_0 | 4 | 19.7 MB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [all-MiniLM-L6-v2-Q4_K_M.gguf](https://huggingface.co/second-state/All-MiniLM-L6-v2-Embedding-GGUF/blob/main/all-MiniLM-L6-v2-Q4_K_M.gguf) | Q4_K_M | 4 | 21 MB | medium, balanced quality - recommended |
| [all-MiniLM-L6-v2-Q4_K_S.gguf](https://huggingface.co/second-state/All-MiniLM-L6-v2-Embedding-GGUF/blob/main/all-MiniLM-L6-v2-Q4_K_S.gguf) | Q4_K_S | 4 | 20.7 MB | small, greater quality loss |
| [all-MiniLM-L6-v2-Q5_0.gguf](https://huggingface.co/second-state/All-MiniLM-L6-v2-Embedding-GGUF/blob/main/all-MiniLM-L6-v2-Q5_0.gguf) | Q5_0 | 5 | 21 MB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [all-MiniLM-L6-v2-Q5_K_M.gguf](https://huggingface.co/second-state/All-MiniLM-L6-v2-Embedding-GGUF/blob/main/all-MiniLM-L6-v2-Q5_K_M.gguf) | Q5_K_M | 5 | 21.7 MB | large, very low quality loss - recommended |
| [all-MiniLM-L6-v2-Q5_K_S.gguf](https://huggingface.co/second-state/All-MiniLM-L6-v2-Embedding-GGUF/blob/main/all-MiniLM-L6-v2-Q5_K_S.gguf) | Q5_K_S | 5 | 21.5 MB | large, low quality loss - recommended |
| [all-MiniLM-L6-v2-Q6_K.gguf](https://huggingface.co/second-state/All-MiniLM-L6-v2-Embedding-GGUF/blob/main/all-MiniLM-L6-v2-Q6_K.gguf) | Q6_K | 6 | 24.2 MB | very large, extremely low quality loss |
| [all-MiniLM-L6-v2-Q8_0.gguf](https://huggingface.co/second-state/All-MiniLM-L6-v2-Embedding-GGUF/blob/main/all-MiniLM-L6-v2-Q8_0.gguf) | Q8_0 | 8 | 25 MB | very large, extremely low quality loss - not recommended |
| [all-MiniLM-L6-v2-ggml-model-f16.gguf](https://huggingface.co/second-state/All-MiniLM-L6-v2-Embedding-GGUF/blob/main/all-MiniLM-L6-v2-ggml-model-f16.gguf) | f16 | 16 | 45.9 MB | largest, lossless - maximum quality |
|
|
|
*Quantized with llama.cpp b2334* |
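
If you only need one of these files, `huggingface-cli` can fetch it directly; a sketch downloading the recommended Q5_K_M quant into the current directory (assumes the `huggingface_hub` CLI is installed, e.g. via `pip install huggingface_hub`):

```bash
# Download a single GGUF file from this repo
huggingface-cli download second-state/All-MiniLM-L6-v2-Embedding-GGUF \
  all-MiniLM-L6-v2-Q5_K_M.gguf \
  --local-dir .
```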
|
|