Nomic-embed-text-v1.5-Embedding-GGUF

Original Model

nomic-ai/nomic-embed-text-v1.5

Run with LlamaEdge

  • LlamaEdge version: v0.12.3 and above

  • Context size: 768

  • Run as LlamaEdge service

    wasmedge --dir .:. --nn-preload default:GGML:AUTO:nomic-embed-text-v1.5-f16.gguf \
      llama-api-server.wasm \
      --prompt-template embedding \
      --ctx-size 768 \
      --model-name nomic-embed-text-v1.5
    
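Once the service is running, it exposes an OpenAI-compatible API (by default on localhost:8080). Below is a minimal sketch of an embeddings request with curl, assuming the default port and the model name set in the command above:

    curl -s http://localhost:8080/v1/embeddings \
      -H 'Content-Type: application/json' \
      -d '{
        "model": "nomic-embed-text-v1.5",
        "input": ["Hello, world!"]
      }'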

Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ---- |
| nomic-embed-text-v1.5-Q2_K.gguf | Q2_K | 2 | 60.9 MB | smallest, significant quality loss - not recommended for most purposes |
| nomic-embed-text-v1.5-Q3_K_L.gguf | Q3_K_L | 3 | 80.7 MB | small, substantial quality loss |
| nomic-embed-text-v1.5-Q3_K_M.gguf | Q3_K_M | 3 | 76.3 MB | very small, high quality loss |
| nomic-embed-text-v1.5-Q3_K_S.gguf | Q3_K_S | 3 | 68.8 MB | very small, high quality loss |
| nomic-embed-text-v1.5-Q4_0.gguf | Q4_0 | 4 | 84.8 MB | legacy; small, very high quality loss - prefer using Q3_K_M |
| nomic-embed-text-v1.5-Q4_K_M.gguf | Q4_K_M | 4 | 90.2 MB | medium, balanced quality - recommended |
| nomic-embed-text-v1.5-Q4_K_S.gguf | Q4_K_S | 4 | 84.1 MB | small, greater quality loss |
| nomic-embed-text-v1.5-Q5_0.gguf | Q5_0 | 5 | 98 MB | legacy; medium, balanced quality - prefer using Q4_K_M |
| nomic-embed-text-v1.5-Q5_K_M.gguf | Q5_K_M | 5 | 103 MB | large, very low quality loss - recommended |
| nomic-embed-text-v1.5-Q5_K_S.gguf | Q5_K_S | 5 | 98 MB | large, low quality loss - recommended |
| nomic-embed-text-v1.5-Q6_K.gguf | Q6_K | 6 | 113 MB | very large, extremely low quality loss |
| nomic-embed-text-v1.5-Q8_0.gguf | Q8_0 | 8 | 146 MB | very large, extremely low quality loss - not recommended |
| nomic-embed-text-v1.5-f16.gguf | f16 | 16 | 274 MB | full precision; largest, minimal quality loss |

Quantized with llama.cpp b2636
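To fetch one of the files above, huggingface-cli (from the huggingface_hub package) can download it directly. A minimal sketch; `<repo-id>` is a placeholder for this model card's repository ID:

    # Download the recommended Q5_K_M quantization into the current directory
    # (replace <repo-id> with this repository's actual ID)
    huggingface-cli download <repo-id> nomic-embed-text-v1.5-Q5_K_M.gguf --local-dir .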

Model size: 137M params
Architecture: nomic-bert
