
🚀 Falcon-180B-Chat

Falcon-180B-Chat is a 180B-parameter causal decoder-only model built by TII based on Falcon-180B and finetuned on a mixture of Ultrachat, Platypus and Airoboros. It is made available under the Falcon-180B TII License and Acceptable Use Policy.

Paper coming soon 😊

🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading this great blogpost from HF or this one from the release of the 40B! Note that since the 180B is larger than what can easily be handled with transformers+accelerate, we recommend using Text Generation Inference.

You will need at least 400GB of memory to swiftly run inference with Falcon-180B.
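That figure follows from the parameter count: 180B weights in bfloat16 occupy roughly 180e9 × 2 bytes ≈ 360GB before activations and the KV cache. Once a Text Generation Inference server is up, it can be queried over its REST API; a minimal sketch, assuming a local endpoint on port 8080 (the URL and generation parameters below are illustrative assumptions, not part of this card):

# Minimal sketch: query a running Text Generation Inference (TGI) server
# hosting tiiuae/falcon-180b-chat. Assumes the server was launched
# separately and listens locally on port 8080 (an assumption).
import requests

TGI_URL = "http://localhost:8080/generate"  # assumed local TGI endpoint

payload = {
    "inputs": "Daniel: Hello, Girafatron!\nGirafatron:",
    "parameters": {"max_new_tokens": 200, "temperature": 0.7},
}
response = requests.post(TGI_URL, json=payload, timeout=300)
response.raise_for_status()
print(response.json()["generated_text"])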

Why use Falcon-180B-Chat?

  • ✨ You are looking for a ready-to-use chat/instruct model based on Falcon-180B.
  • It is the best open-access model currently available, and one of the best models overall. Falcon-180B outperforms LLaMA-2, StableLM, RedPajama, MPT, etc. See the OpenLLM Leaderboard.
  • It features an architecture optimized for inference, with multiquery (Shazeer et al., 2019).
  • It is made available under a permissive license allowing for commercial use.

💬 This is a Chat model, which may not be ideal for further finetuning. If you are interested in building your own instruct/chat model, we recommend starting from Falcon-180B.
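As a chat model, it expects conversational prompts. The exact template below is an assumption based on the official chat demo (plain-text System/User/Falcon turns), not a specification from this card; a minimal formatting helper:

# Hypothetical prompt formatter for Falcon-180B-Chat. The
# "System:/User:/Falcon:" plain-text template is an assumption based on
# the official chat demo, not an official specification.
def format_prompt(message, history=(), system_prompt=""):
    prompt = f"System: {system_prompt}\n" if system_prompt else ""
    for user_turn, bot_turn in history:
        prompt += f"User: {user_turn}\nFalcon: {bot_turn}\n"
    prompt += f"User: {message}\nFalcon:"
    return prompt

print(format_prompt("Hello, Girafatron!", system_prompt="You are Girafatron."))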

💸 Looking for a smaller, less expensive model? Falcon-7B-Instruct and Falcon-40B-Instruct are Falcon-180B-Chat's little brothers!

💥 Falcon LLMs require PyTorch 2.0 for use with transformers!

Model Card for Falcon-180B-Chat

Model Details

Model Description

  • Developed by: https://www.tii.ae;
  • Model type: Causal decoder-only;
  • Language(s) (NLP): Mostly English (see Bias, Risks, and Limitations);
  • License: Falcon-180B TII License and Acceptable Use Policy;
  • Finetuned from model: Falcon-180B.

Model Source

  • Paper: coming soon.


Uses

See the acceptable use policy.

Direct Use

Falcon-180B-Chat has been finetuned on a chat dataset.

Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

Bias, Risks, and Limitations

Falcon-180B-Chat is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.


We recommend that users of Falcon-180B-Chat develop guardrails and take appropriate precautions for any production use.
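What those guardrails look like is deployment-specific; as one minimal illustration (a sketch, not a substitute for a proper safety stack), generations can be screened before they are returned:

# Purely illustrative guardrail: screen generations against a blocklist
# before returning them. Real deployments need far more than this
# (moderation models, rate limiting, human review, ...).
BLOCKLIST = {"example_forbidden_term"}  # hypothetical placeholder terms

def screen(generated_text: str) -> str:
    if any(term in generated_text.lower() for term in BLOCKLIST):
        return "[response withheld by content filter]"
    return generated_text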

How to Get Started with the Model

To run inference with the model in full bfloat16 precision you need approximately 8xA100 80GB or equivalent.

from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-180b-chat"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")

Training Details

Falcon-180B-Chat is based on Falcon-180B.

Training Data

Falcon-180B-Chat is finetuned on a mixture of Ultrachat, Platypus and Airoboros.

The data was tokenized with the Falcon tokenizer.
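The tokenizer ships with the checkpoint, so it can be inspected directly; for example:

# Inspect the Falcon tokenizer that was used to tokenize the training data.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-180b-chat")
ids = tokenizer("Girafatron is obsessed with giraffes.")["input_ids"]
print(len(ids), tokenizer.convert_ids_to_tokens(ids)[:5])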


Evaluation

Paper coming soon.

See the OpenLLM Leaderboard for early results.

Technical Specifications

Model Architecture and Objective

Falcon-180B-Chat is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
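Concretely, the model maximizes the log-likelihood of each token given its left context; in LaTeX:

\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta\left(x_t \mid x_{<t}\right)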

The architecture is broadly adapted from the GPT-3 paper (Brown et al., 2020), with the following differences:

  • Positional embeddings: rotary (Su et al., 2021);
  • Attention: multiquery (Shazeer et al., 2019) and FlashAttention (Dao et al., 2022);
  • Decoder-block: parallel attention/MLP.

For multiquery, we are using an internal variant which uses independent key and values per tensor parallel degree.
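To make the difference concrete, here is a shape-level sketch of multiquery attention (illustrative PyTorch, not the actual training code): all query heads share a single key/value head, which shrinks the KV cache during inference.

# Shape-level sketch of multiquery attention: 232 query heads share one
# key/value head (232 × 64 = 14848 = d_model, matching the table below).
# Illustrative only, not the actual Falcon implementation.
import torch

batch, seq, n_heads, head_dim = 1, 16, 232, 64
q = torch.randn(batch, n_heads, seq, head_dim)  # one query per head
k = torch.randn(batch, 1, seq, head_dim)        # single shared key head
v = torch.randn(batch, 1, seq, head_dim)        # single shared value head

scores = (q @ k.transpose(-2, -1)) / head_dim**0.5  # broadcasts over heads
attn = torch.softmax(scores, dim=-1) @ v            # (1, 232, 16, 64)
print(attn.shape)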

| Hyperparameter  | Value | Comment                                |
|-----------------|-------|----------------------------------------|
| Layers          | 80    |                                        |
| d_model         | 14848 |                                        |
| head_dim        | 64    | Reduced to optimise for FlashAttention |
| Vocabulary      | 65024 |                                        |
| Sequence length | 2048  |                                        |

Compute Infrastructure


Hardware

Falcon-180B-Chat was trained on AWS SageMaker, on up to 4,096 A100 40GB GPUs in P4d instances.


Software

Falcon-180B-Chat was trained with a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
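In 3D parallelism, the tensor-, pipeline- and data-parallel degrees multiply to the total GPU count; the degrees below are illustrative assumptions, not Gigatron's actual configuration:

# Illustrative 3D-parallelism bookkeeping. The specific degrees are
# assumptions for the sake of the arithmetic, not Gigatron's real config.
tensor_parallel = 8    # each weight matrix sharded across 8 GPUs
pipeline_parallel = 8  # the 80 layers split into 8 sequential stages
data_parallel = 64     # ZeRO-sharded replicas over different batches

world_size = tensor_parallel * pipeline_parallel * data_parallel
assert world_size == 4096  # matches "up to 4,096 A100 40GB GPUs"
print(world_size, "GPUs;", 80 // pipeline_parallel, "layers per stage")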


Citation

Paper coming soon 😊. In the meantime, you can use the following information to cite:

@article{falcon,
  title={The Falcon Series of Language Models: Towards Open Frontier Models},
  author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
  year={2023}
}

To learn more about the pretraining dataset, see the 📓 RefinedWeb paper.

@article{refinedweb,
  title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
  author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
  journal={arXiv preprint arXiv:2306.01116},
  eprint={2306.01116},
  eprinttype={arXiv},
  year={2023}
}


