Falcon3-3B-Instruct-GGUF

The Falcon3 family of Open Foundation Models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.

Falcon3-3B-Instruct achieves strong results on reasoning, language understanding, instruction following, code, and mathematics tasks. It supports four languages (English, French, Spanish, Portuguese) and a context length of up to 32K tokens.

This repository contains GGUF quantizations of the instruction-tuned 3B Falcon3 model.

Model Details

  • Architecture
    • Transformer-based causal decoder-only architecture
    • 22 decoder blocks
    • Grouped Query Attention (GQA) for faster inference: 12 query heads and 4 key-value heads
    • Wider head dimension: 256
    • High RoPE value to support long context understanding: 1000042
    • Uses SwiGLU and RMSNorm
    • 32K context length
    • 131K vocab size
  • Pruned and healed from Falcon3-7B-Base on only 100 gigatokens of web, code, STEM, high-quality, and multilingual data using 1024 H100 GPU chips
  • Post-trained on 1.2 million samples of STEM, conversational, code, safety, and function-call data
  • Supports EN, FR, ES, PT
  • Developed by Technology Innovation Institute
  • License: TII Falcon-LLM License 2.0
  • Model Release Date: December 2024
  • Quantization: q2_K, q3_K_M, q4_0, q4_K_M, q5_0, q5_K_M, q6_K, q8_0

Getting started

1. Download GGUF models from Hugging Face

First, download the model from Hugging Face. You can use the huggingface_hub library or download it manually:

pip install huggingface_hub
huggingface-cli download {model_name}

This will download the model to your current directory. Make sure to replace {model_name} with the actual username and model name, e.g. tiiuae/Falcon3-3B-Instruct-GGUF.
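
For example, to fetch a single quantization instead of the whole repository, you can pass a filename to huggingface-cli (the exact filename below is an assumption; check the repository's file list for the real names):

# Download only the q4_K_M quantization into the current directory
huggingface-cli download tiiuae/Falcon3-3B-Instruct-GGUF Falcon3-3B-Instruct-q4_k_m.gguf --local-dir .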

2. Install llama.cpp

You have several options for installing llama.cpp:

1. Build from source:

This gives you the most flexibility and control. Follow the instructions in the llama.cpp repository to build from source:


# Fetch the sources and build the CLI tools
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

For more information, please refer to the llama.cpp documentation on building from source.
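
Once the build completes, the binaries (including the llama-cli tool used below) should be in build/bin; a quick sanity check, assuming the default CMake layout:

# Print the build version to confirm the binary works
./build/bin/llama-cli --version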

2. Download pre-built binaries:

If you prefer a quicker setup, you can download pre-built binaries for your operating system. Check the llama.cpp repository for available binaries.

3. Use Docker:

For a more contained environment, you can use the official llama.cpp Docker image. Refer to the llama.cpp documentation for instructions on how to use the Docker image.

For detailed instructions and more information, please check the llama.cpp documentation on Docker.
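
As a rough sketch (the image tag and mount path are assumptions; consult the llama.cpp Docker docs for the images currently published), running a one-off completion against a local GGUF file could look like:

# Mount the directory containing the GGUF file and run the CLI image
docker run -v /path/to/models:/models ghcr.io/ggerganov/llama.cpp:light -m /models/Falcon3-3B-Instruct-q4_k_m.gguf -p "I believe the meaning of life is" -n 128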

3. Start playing with your model

Run simple text completion

llama-cli -m {path-to-gguf-model} -p "I believe the meaning of life is" -n 128

Run in conversation mode

llama-cli -m {path-to-gguf-model} -p "You are a helpful assistant" -cnv -co
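
If you prefer an HTTP interface, llama.cpp also ships a llama-server binary that exposes an OpenAI-compatible endpoint; a minimal sketch (port and payload are illustrative):

# Serve the model on port 8080
llama-server -m {path-to-gguf-model} --port 8080

# Query the OpenAI-compatible chat endpoint from another shell
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"messages": [{"role": "user", "content": "Hello!"}]}'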

Useful links

Technical Report

Coming soon.

Citation

If the Falcon3 family of models was helpful to your work, feel free to cite it.

@misc{Falcon3,
    title = {The Falcon 3 Family of Open Models},
    url = {https://huggingface.co/blog/falcon3},
    author = {Falcon-LLM Team},
    month = {December},
    year = {2024}
}