
# Gemma-2b-it

## Original Model

[google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it)

## Run with LlamaEdge

- LlamaEdge version: v0.3.2

- Prompt template

  - Prompt type: `gemma-instruct`

  - Prompt string

    ```text
    <start_of_turn>user
    {user_message}<end_of_turn>
    <start_of_turn>model
    {model_message}<end_of_turn>
    ```

- Context size: `4096`

- Run as LlamaEdge service (see the example request after this list)

  ```bash
  wasmedge --dir .:. --nn-preload default:GGML:AUTO:gemma-2b-it-Q5_K_M.gguf llama-api-server.wasm -p gemma-instruct -c 4096
  ```

- Run as LlamaEdge command app

  ```bash
  wasmedge --dir .:. --nn-preload default:GGML:AUTO:gemma-2b-it-Q5_K_M.gguf llama-chat.wasm -p gemma-instruct -c 4096
  ```

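Once the API server is up, you can exercise it over its OpenAI-compatible HTTP interface. Below is a minimal sketch, assuming the server listens on the default `8080` port and serves `/v1/chat/completions`; adjust the address if your setup differs, and note the `"model"` value here is just an illustrative label:

```bash
# Query the OpenAI-compatible chat endpoint exposed by llama-api-server.wasm.
# Assumes the default listen address of 0.0.0.0:8080; adjust if yours differs.
curl -X POST http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "messages": [
          {"role": "user", "content": "What is the capital of France?"}
        ],
        "model": "gemma-2b-it"
      }'
```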
## Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ------------ | ---- | ---- | -------- |
| gemma-2b-it-Q2_K.gguf | Q2_K | 2 | 900 MB | smallest, significant quality loss - not recommended for most purposes |
| gemma-2b-it-Q3_K_L.gguf | Q3_K_L | 3 | 1.26 GB | small, substantial quality loss |
| gemma-2b-it-Q3_K_M.gguf | Q3_K_M | 3 | 1.18 GB | very small, high quality loss |
| gemma-2b-it-Q3_K_S.gguf | Q3_K_S | 3 | 1.08 GB | very small, high quality loss |
| gemma-2b-it-Q4_0.gguf | Q4_0 | 4 | 1.42 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| gemma-2b-it-Q4_K_M.gguf | Q4_K_M | 4 | 1.5 GB | medium, balanced quality - recommended |
| gemma-2b-it-Q4_K_S.gguf | Q4_K_S | 4 | 1.42 GB | small, greater quality loss |
| gemma-2b-it-Q5_0.gguf | Q5_0 | 5 | 1.73 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| gemma-2b-it-Q5_K_M.gguf | Q5_K_M | 5 | 1.77 GB | large, very low quality loss - recommended |
| gemma-2b-it-Q5_K_S.gguf | Q5_K_S | 5 | 1.73 GB | large, low quality loss - recommended |
| gemma-2b-it-Q6_K.gguf | Q6_K | 6 | 2.06 GB | very large, extremely low quality loss |
| gemma-2b-it-Q8_0.gguf | Q8_0 | 8 | 2.67 GB | very large, extremely low quality loss - not recommended |

*Quantized with llama.cpp b2230*
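If you still need the model file and the LlamaEdge apps referenced above, both can be fetched with `curl`. A minimal sketch, assuming this card's repo is `second-state/Gemma-2b-it-GGUF` and that the release assets keep their current names; substitute the actual repo path and release URLs as needed:

```bash
# Download the recommended Q5_K_M quantization; the repo path is an assumption -
# substitute the repository actually hosting this card.
curl -LO https://huggingface.co/second-state/Gemma-2b-it-GGUF/resolve/main/gemma-2b-it-Q5_K_M.gguf

# Fetch the LlamaEdge apps used in the run commands above; asset names may change,
# so check https://github.com/LlamaEdge/LlamaEdge/releases if these 404.
curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-api-server.wasm
curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-chat.wasm
```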
