Falcon3-1B-Instruct-GGUF

Original Model

tiiuae/Falcon3-1B-Instruct

Run with LlamaEdge

  • LlamaEdge version: coming soon
  • Prompt template

    • Prompt type: falcon3

    • Prompt string (a sketch of how it is assembled appears after this list)

      <|system|>
      You are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible.
      <|user|>
      {user_message}
      <|assistant|>
      
  • Context size: 8000

  • Run as LlamaEdge service (an example request follows this list)

    wasmedge --dir .:. --nn-preload default:GGML:AUTO:Falcon3-1B-Instruct-Q5_K_M.gguf \
        llama-api-server.wasm \
        --prompt-template falcon3 \
        --ctx-size 8000 \
        --model-name Falcon3-1B-Instruct
    
  • Run as LlamaEdge command app

    wasmedge --dir .:. --nn-preload default:GGML:AUTO:Falcon3-1B-Instruct-Q5_K_M.gguf \
      llama-chat.wasm \
      --prompt-template falcon3 \
      --ctx-size 8000
    
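For reference, the prompt string above is what the LlamaEdge apps assemble internally from the chat messages. A minimal shell sketch of that substitution, using a hypothetical `user_message` variable for illustration only:

    # Sketch only: substitute the user's message into the falcon3 template.
    # The LlamaEdge apps perform this step for you.
    user_message="What is the capital of France?"
    printf '<|system|>\nYou are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible.\n<|user|>\n%s\n<|assistant|>\n' "$user_message"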
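Once the service is up, you can send it an OpenAI-compatible chat request. A sketch, assuming the server is listening on its default address (0.0.0.0:8080):

    # Example request to the llama-api-server chat completions endpoint.
    # The "model" field should match the --model-name passed above.
    curl -X POST http://localhost:8080/v1/chat/completions \
      -H 'accept: application/json' \
      -H 'Content-Type: application/json' \
      -d '{"messages":[{"role":"user","content":"What is the capital of France?"}], "model":"Falcon3-1B-Instruct"}'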

Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ------------ | ---- | ---- | -------- |
| Falcon3-1B-Instruct-Q2_K.gguf | Q2_K | 2 | 727 MB | smallest, significant quality loss - not recommended for most purposes |
| Falcon3-1B-Instruct-Q3_K_L.gguf | Q3_K_L | 3 | 934 MB | small, substantial quality loss |
| Falcon3-1B-Instruct-Q3_K_M.gguf | Q3_K_M | 3 | 885 MB | very small, high quality loss |
| Falcon3-1B-Instruct-Q3_K_S.gguf | Q3_K_S | 3 | 827 MB | very small, high quality loss |
| Falcon3-1B-Instruct-Q4_0.gguf | Q4_0 | 4 | 1.01 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| Falcon3-1B-Instruct-Q4_K_M.gguf | Q4_K_M | 4 | 1.06 GB | medium, balanced quality - recommended |
| Falcon3-1B-Instruct-Q4_K_S.gguf | Q4_K_S | 4 | 1.02 GB | small, greater quality loss |
| Falcon3-1B-Instruct-Q5_0.gguf | Q5_0 | 5 | 1.19 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| Falcon3-1B-Instruct-Q5_K_M.gguf | Q5_K_M | 5 | 1.21 GB | large, very low quality loss - recommended |
| Falcon3-1B-Instruct-Q5_K_S.gguf | Q5_K_S | 5 | 1.19 GB | large, low quality loss - recommended |
| Falcon3-1B-Instruct-Q6_K.gguf | Q6_K | 6 | 1.37 GB | very large, extremely low quality loss |
| Falcon3-1B-Instruct-Q8_0.gguf | Q8_0 | 8 | 1.78 GB | very large, extremely low quality loss - not recommended |
| Falcon3-1B-Instruct-f16.gguf | f16 | 16 | 3.34 GB | |

Quantized with llama.cpp b4381
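To fetch one of these files, e.g. the Q5_K_M build used in the commands above, you can use Hugging Face's standard resolve URL (a sketch; any other quantization from the table works the same way):

    # Download the Q5_K_M quantization from this repo
    curl -LO https://huggingface.co/second-state/Falcon3-1B-Instruct-GGUF/resolve/main/Falcon3-1B-Instruct-Q5_K_M.gguf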
