---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: false
---
# Falcon-40B-Instruct GPTQ
This repo contains an experimental GPTQ 4bit model for Falcon-40B-Instruct.
It is the result of quantising to 4bit using AutoGPTQ.
Need support? Want to discuss? I now have a Discord!
Join me at: https://discord.gg/UBgz4VXf
Want to support me and help pay my cloud computing bill? I also now have a Patreon! https://www.patreon.com/TheBlokeAI
## EXPERIMENTAL
Please note this is an experimental first model. Support for it is currently quite limited.
To use it you will require:

- AutoGPTQ, from the latest `main` branch, compiled with `pip install .`
- `pip install einops`
You can then use it immediately from Python code - see the example code below.
## text-generation-webui
There is also provisional AutoGPTQ support in text-generation-webui.
However, at the time of writing, text-generation-webui needs an additional commit before it can load this model.
I have opened a PR here; once this is merged, text-generation-webui will support this GPTQ model.
To get it working before the PR is merged, you will need to:
- Edit `text-generation-webui/modules/AutoGPTQ_loader.py`
- Make the following change. Find the line that says:

  `'use_safetensors': use_safetensors,`

  And after it, add:

  `'trust_remote_code': shared.args.trust_remote_code,`

- Once you are done, the edited section should look roughly like the sketch below.
- Then save and close the file, and launch text-generation-webui as described below.
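This is an illustrative sketch only: apart from the `use_safetensors` line and the newly added `trust_remote_code` line, the surrounding keys are assumptions and may differ between text-generation-webui versions.

```python
# Rough sketch of the relevant params dict in modules/AutoGPTQ_loader.py
# after the edit. Only 'use_safetensors' and the added 'trust_remote_code'
# entry come from the instructions above; the other keys are hypothetical.
params = {
    'model_basename': model_basename,
    'device': "cuda:0",
    'use_safetensors': use_safetensors,
    'trust_remote_code': shared.args.trust_remote_code,  # newly added line
}
```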
## How to download and use this model in text-generation-webui
- Launch text-generation-webui with the following command-line arguments: `--autogptq --trust_remote_code` (see the example launch command after this list).
- Click the Model tab.
- Under Download custom model or LoRA, enter `TheBloke/falcon-40b-instruct-GPTQ`.
- Click Download.
- Wait until it says it's finished downloading.
- Click the Refresh icon next to Model in the top left.
- In the Model drop-down: choose the model you just downloaded, `falcon-40b-instruct-GPTQ`.
- Once it says it's loaded, click the Text Generation tab and enter a prompt!
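For reference, a typical launch command would look like this, assuming you start the webui with its standard `server.py` entry point (adjust for your own install):

```
python server.py --autogptq --trust_remote_code
```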
## About `trust_remote_code`
Please be aware that this command-line argument causes Python code provided by Falcon to be executed on your machine.
This code is required at the moment because Falcon is too new to be supported by Hugging Face transformers. At some point in the future, transformers will support the model natively, and then `trust_remote_code` will no longer be needed.
In this repo you can see two `.py` files - these are the files that get executed. They are copied from the base repo at Falcon-40B-Instruct.
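For context, this is how `trust_remote_code` is passed when loading the unquantised base model with plain transformers (an illustrative sketch; the GPTQ loading path is shown in the example further below):

```python
from transformers import AutoModelForCausalLM

# trust_remote_code=True lets transformers execute the custom modelling .py
# files shipped inside the model repo, which Falcon currently requires.
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-40b-instruct",
    trust_remote_code=True,
)
```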
## Simple Python example code
To run this code you need to install AutoGPTQ from source:

```
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip install .  # This step requires CUDA toolkit installed
```

And install einops:

```
pip install einops
```
You can then run this example code:

```python
import torch
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Download the model from HF and store it locally, then reference its location here:
quantized_model_dir = "/path/to/falcon40b-instruct-gptq"

tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, use_fast=False)
model = AutoGPTQForCausalLM.from_quantized(
    quantized_model_dir,
    device="cuda:0",
    use_triton=False,
    use_safetensors=True,
    torch_dtype=torch.float32,
    trust_remote_code=True,
)

prompt = "Write a story about llamas"
prompt_template = f"### Instruction: {prompt}\n### Response:"

tokens = tokenizer(prompt_template, return_tensors="pt").to("cuda:0").input_ids
output = model.generate(input_ids=tokens, max_new_tokens=100, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0]))
```
## Provided files

### gptq_model-4bit.safetensors
This will work with AutoGPTQ as of commit `3cb1bf5` (`3cb1bf5a6d43a06dc34c6442287965d1838303d3`).

It was created with no groupsize to reduce VRAM requirements as much as possible, and with `desc_act` (act-order) to increase inference quality.
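For reference, these settings correspond to a quantisation config along these lines (a sketch using AutoGPTQ's `BaseQuantizeConfig`; not the exact command used to produce this file):

```python
from auto_gptq import BaseQuantizeConfig

# Sketch of the quantisation settings described above (illustrative only).
quantize_config = BaseQuantizeConfig(
    bits=4,          # 4-bit quantisation
    group_size=-1,   # no groupsize, to minimise VRAM usage
    desc_act=True,   # act-order, to improve inference quality
)
```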
`gptq_model-4bit.safetensors`:

- Works only with the latest AutoGPTQ CUDA, compiled from source as of commit `3cb1bf5`
- At this time it does not work with AutoGPTQ Triton, but support will hopefully be added in time.
- Works with text-generation-webui using `--autogptq --trust_remote_code`
- At this time it does NOT work with one-click-installers
- Does not work with any version of GPTQ-for-LLaMa
- Parameters: no groupsize; `desc_act` (act-order) = True.
## Want to support my work?
I've had a lot of people ask if they can contribute. I love providing models and helping people, but it is starting to rack up pretty big cloud computing bills.
So if you're able and willing to contribute, it'd be most gratefully received and will help me to keep providing models and to work on various AI projects.
- Patreon: https://www.patreon.com/TheBlokeAI
- Ko-Fi: https://ko-fi.com/TheBlokeAI
# Original model card: Falcon-40B-Instruct

# ✨ Falcon-40B-Instruct
Falcon-40B-Instruct is a 40B parameters causal decoder-only model built by TII based on Falcon-40B and finetuned on a mixture of Baize. It is made available under the TII Falcon LLM License.
Paper coming soon 😊.
## Why use Falcon-40B-Instruct?
- You are looking for a ready-to-use chat/instruct model based on Falcon-40B.
- Falcon-40B is the best open-source model available. It outperforms LLaMA, StableLM, RedPajama, MPT, etc. See the OpenLLM Leaderboard.
- It features an architecture optimized for inference, with FlashAttention (Dao et al., 2022) and multiquery (Shazeer et al., 2019).
💬 This is an instruct model, which may not be ideal for further finetuning. If you are interested in building your own instruct/chat model, we recommend starting from Falcon-40B.
💸 Looking for a smaller, less expensive model? Falcon-7B-Instruct is Falcon-40B-Instruct's small brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-40b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
# Model Card for Falcon-40B-Instruct

## Model Details

### Model Description
- Developed by: https://www.tii.ae;
- Model type: Causal decoder-only;
- Language(s) (NLP): English and French;
- License: TII Falcon LLM License;
- Finetuned from model: Falcon-40B.
### Model Source
- Paper: coming soon.
## Uses

### Direct Use
Falcon-40B-Instruct has been finetuned on a chat dataset.
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon-40B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend that users of Falcon-40B-Instruct develop guardrails and take appropriate precautions for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-40b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
## Training Details

### Training Data
Falcon-40B-Instruct was finetuned on 150M tokens from Baize mixed with 5% of RefinedWeb data.
The data was tokenized with the Falcon-7B/40B tokenizer.
## Evaluation
Paper coming soon.
See the OpenLLM Leaderboard for early results.
## Technical Specifications
For more information about pretraining, see Falcon-40B.
### Model Architecture and Objective
Falcon-40B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper (Brown et al., 2020), with the following differences:
- Positional embeddings: rotary (Su et al., 2021) - see the sketch after this list;
- Attention: multiquery (Shazeer et al., 2019) and FlashAttention (Dao et al., 2022);
- Decoder-block: parallel attention/MLP with a single layer norm.
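For intuition, here is a simplified sketch of rotary positional embeddings (the interleaved-pair formulation is assumed for illustration; Falcon's actual kernels differ):

```python
import torch

# Simplified rotary embedding (Su et al., 2021): rotate channel pairs by a
# position-dependent angle so attention scores become relative-position aware.
def apply_rotary(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    seq, dim = x.shape                                   # (positions, head_dim)
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    angles = torch.outer(torch.arange(seq).float(), inv_freq)  # (seq, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, 0::2], x[:, 1::2]                      # interleaved pairs
    out = torch.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Example: rotate a toy query of 8 positions with head_dim 64.
q_rot = apply_rotary(torch.randn(8, 64))
```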
For multiquery, we are using an internal variant which uses independent key and values per tensor parallel degree.
| Hyperparameter | Value | Comment |
|----------------|-------|---------|
| Layers | 60 | |
| `d_model` | 8192 | |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
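From the table, the implied number of query heads is `d_model / head_dim` = 8192 / 64 = 128, assuming the heads exactly partition the model dimension. A minimal sketch of the multiquery idea follows (illustrative only: toy sequence length, no causal mask, and not TII's tensor-parallel internal variant):

```python
import torch

# Multiquery attention sketch (Shazeer et al., 2019): every query head shares
# a single key/value head. Toy sizes; Falcon-40B uses sequence length 2048.
batch, seq, head_dim = 1, 16, 64
n_heads = 8192 // head_dim                       # 128 query heads, per the table
q = torch.randn(batch, n_heads, seq, head_dim)   # one query projection per head
k = torch.randn(batch, 1, seq, head_dim)         # single shared key head
v = torch.randn(batch, 1, seq, head_dim)         # single shared value head
# Broadcasting expands the shared k/v across all query heads.
scores = q @ k.transpose(-2, -1) / head_dim**0.5  # (batch, n_heads, seq, seq)
out = torch.softmax(scores, dim=-1) @ v           # (batch, n_heads, seq, head_dim)
```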
## Compute Infrastructure

### Hardware
Falcon-40B-Instruct was trained on AWS SageMaker, on 64 A100 40GB GPUs in P4d instances.
### Software
Falcon-40B-Instruct was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
## Citation
Paper coming soon 😊.
## License
Falcon-40B-Instruct is made available under the TII Falcon LLM License. Broadly speaking,
- You can freely use our models for research and/or personal purposes;
- You are allowed to share and build derivatives of these models, but you are required to give attribution and to share-alike with the same license;
- For commercial use, you are exempt from royalty payments if the attributable revenues are below $1M/year; otherwise, you should enter into a commercial agreement with TII.