---
language:
  - en
tags:
  - falcon3
  - falcon3_mamba
  - falcon_mamba
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
library_name: transformers
---

# Falcon3-Mamba-7B-Base

The Falcon3 family of Open Foundation Models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.

This repository contains Falcon3-Mamba-7B-Base. Compared to SSM-based models of similar size, it achieves state-of-the-art results (at the time of release) on reasoning, language understanding, instruction following, code, and mathematics tasks. Falcon3-Mamba-7B-Base supports a context length of up to 32K and was trained mainly on an English corpus.

## Model Details

- Architecture (same as Falcon-Mamba-7B)
  - Mamba1-based, causal decoder-only architecture trained on a causal language modeling task (i.e., predicting the next token); the listed values can be checked against the model config, as shown in the sketch after this list
  - 64 decoder blocks
  - Width: 4096
  - State dimension: 16
  - 32K context length
  - 65K vocab size
- Continued pretraining from Falcon-Mamba-7B on an additional 1,500 gigatokens of data comprising web, code, STEM, and high-quality data
- Posttrained on 1.2 million samples of STEM, conversations, code, and safety data
- Developed by Technology Innovation Institute (TII)
- License: TII Falcon-LLM License 2.0
- Model Release Date: December 2024
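The architecture values listed above can be checked from the published config without downloading the weights. Below is a minimal sketch; the attribute names (`num_hidden_layers`, `hidden_size`, `state_size`, `vocab_size`) are assumptions based on the FalconMamba configuration class in `transformers`:

```python
# Minimal sketch: inspect the published config to confirm the listed details.
# Attribute names are assumptions based on transformers' FalconMamba config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("tiiuae/Falcon3-Mamba-7B-Base")

print(config.num_hidden_layers)  # expected: 64 decoder blocks
print(config.hidden_size)        # expected: 4096 (width)
print(config.state_size)         # expected: 16 (SSM state dimension)
print(config.vocab_size)         # expected: ~65k vocabulary
```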

## Getting started

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tiiuae/Falcon3-Mamba-7B-Base"

# Load the weights in their native dtype and shard them across available devices
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many hours in one day?"
messages = [
    {"role": "system", "content": "You are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible."},
    {"role": "user", "content": prompt}
]
# Render the messages with the model's chat template and append the generation prompt
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate up to 1024 new tokens
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024
)
# Strip the prompt tokens so only the newly generated tokens are decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
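For interactive use, generation can also be streamed token by token. This is a short sketch using `transformers`' `TextStreamer`, reusing `model`, `tokenizer`, and `model_inputs` from the snippet above:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated, skipping the echoed prompt
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**model_inputs, max_new_tokens=1024, streamer=streamer)
```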

## Benchmarks

We report our internal pipeline benchmarks in the table below. For the benchmarks marked with an asterisk (*), the results are normalized with the Hugging Face score normalization (a sketch of this normalization follows the table):

| Category | Benchmark | Zamba2-7B | Llama-3.1-8B | Falcon-Mamba-7B | Falcon3-Mamba-7B-Base |
|---|---|---|---|---|---|
| General | MMLU (5-shot) | 64.9 | 66.4 | 59.9 | 64.9 |
| General | MMLU-PRO (5-shot)* | 24.5 | 24.9 | 14.5 | 22.6 |
| General | IFEval | 37.4 | 12.7 | 33.4 | 30.1 |
| Math | GSM8K (5-shot) | 55.8 | 47.9 | 51.3 | 65.9 |
| Math | MATH (4-shot) | 10.3 | 5.1 | 3.6 | 15.6 |
| Reasoning | Arc Challenge (25-shot) | 54.1 | 58.5 | 55.9 | 56.7 |
| Reasoning | GPQA (0-shot)* | 9.4 | 6.2 | 8.1 | 10.6 |
| Reasoning | MUSR (0-shot)* | 7.5 | 8.9 | 10.9 | 4.5 |
| Reasoning | BBH (3-shot)* | 27.9 | 25.3 | 19.9 | 25.6 |
| Common-sense Understanding | PIQA (0-shot) | 79.27 | 81.2 | 80.2 | 79.54 |
| Common-sense Understanding | SciQ (0-shot) | 94.4 | 94.6 | 96.3 | 92.0 |
| Common-sense Understanding | Winogrande (0-shot) | 77.4 | 74.0 | 74.9 | 71.27 |
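As a reference for the starred rows, here is a minimal sketch of score normalization, assuming the Open LLM Leaderboard convention of rescaling between the random-guess baseline and a perfect score; the example values are hypothetical:

```python
def normalize_score(raw_acc: float, random_baseline: float) -> float:
    """Map the random-guess baseline to 0 and a perfect score to 100."""
    return 100.0 * (raw_acc - random_baseline) / (1.0 - random_baseline)

# Hypothetical example: a 10-option multiple-choice benchmark
# (random baseline 0.10) with a raw accuracy of 0.30
print(normalize_score(0.30, 0.10))  # ~22.2
```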

## Useful links

## Citation

If the Falcon3 family of models was helpful to your work, feel free to cite it:

```bibtex
@misc{Falcon3,
    title = {The Falcon 3 Family of Open Models},
    author = {Falcon-LLM Team},
    month = {December},
    year = {2024}
}
```