
FalconLite Model

FalconLite is a quantized version of the Falcon 40B SFT OASST-TOP1 model, capable of processing long (up to 11K tokens) input sequences while consuming 4x less GPU memory. By combining 4-bit GPTQ quantization with an adapted dynamic NTK RotaryEmbedding, FalconLite balances latency, accuracy, and memory efficiency. Able to process contexts 5x longer than the original model, FalconLite is useful for applications such as topic retrieval, summarization, and question answering. FalconLite can be deployed on a single AWS g5.12x instance with TGI 0.9.2, making it suitable for applications that require high performance in resource-constrained environments.
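
To make the context extension concrete, below is a minimal sketch of the dynamic NTK idea behind the adapted RotaryEmbedding: when an incoming sequence exceeds the trained context length, the rotary base is enlarged so the rotation frequencies stretch smoothly. Function and parameter names here are illustrative, not FalconLite's actual implementation.

import torch

def dynamic_ntk_inv_freq(dim, seq_len, max_trained_len=2048, base=10000.0, scaling_factor=1.0):
    # Enlarge the rotary base when the sequence is longer than the trained
    # context, following the dynamic NTK-scaling formula.
    if seq_len > max_trained_len:
        base = base * (
            (scaling_factor * seq_len / max_trained_len) - (scaling_factor - 1)
        ) ** (dim / (dim - 2))
    # Standard RoPE inverse frequencies, computed from the (possibly scaled) base.
    return 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))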

Model Details

Deploy FalconLite

SSH into an AWS g5.12x instance running the Deep Learning AMI.

Start LLM server

# clone the FalconLite deployment scripts
git clone https://github.com/awslabs/extending-the-context-length-of-open-source-llms.git falconlite-dev
cd falconlite-dev/script
# build the TGI container image, then launch the FalconLite server
./docker_build.sh
./start_falconlite.sh

Perform inference

# run the client after the FalconLite server has fully started
pip install -r requirements-client.txt
python falconlite_client.py
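
If you prefer to query the server directly rather than through falconlite_client.py, TGI exposes a /generate HTTP endpoint. A minimal sketch, assuming the server listens on localhost port 8080 (check start_falconlite.sh for the actual port and defaults):

import requests

payload = {
    "inputs": "What are the benefits of 4-bit quantization for LLM serving?",
    # Generation parameters here are illustrative defaults, not tuned values.
    "parameters": {"max_new_tokens": 256, "temperature": 0.7},
}
resp = requests.post("http://localhost:8080/generate", json=payload, timeout=300)
resp.raise_for_status()
print(resp.json()["generated_text"])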

New! Amazon SageMaker Deployment

To deploy FalconLite on a SageMaker endpoint, please follow this notebook.
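
For orientation, deploying a TGI-served model on SageMaker generally follows the pattern sketched below; the container image URI, environment variables, and endpoint settings are assumptions here, so use the values from the notebook for an actual deployment.

import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()

# Placeholder image URI and env vars -- take the real ones from the notebook.
model = HuggingFaceModel(
    role=role,
    image_uri="<tgi-container-image-uri>",
    env={
        "HF_MODEL_ID": "amazon/FalconLite",  # assumed model id
        "MAX_INPUT_LENGTH": "11000",
        "MAX_TOTAL_TOKENS": "11512",
    },
)
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.12xlarge",
)
print(predictor.predict({"inputs": "Hello, FalconLite!"}))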

Important - When using FalconLite for inference for the first time, it may require a brief 'warm-up' period that can take tens of seconds. Subsequent inferences should be faster and return results more promptly. This warm-up period is normal and does not affect overall performance once initialization is complete.

Evaluation Results

We evaluated FalconLite against benchmarks that are specifically designed to assess the capabilities of LLMs in handling longer contexts. All evaluations were conducted without fine-tuning the model.

Accuracy

| Eval task | Input length 2800 ~ 3800 | Input length 5500 ~ 5600 | Input length 7500 ~ 8300 | Input length 9300 ~ 11000 |
|---|---|---|---|---|
| Topic Retrieval | 100% | 100% | 92% | 92% |
| Line Retrieval | 38% | 12% | 8% | 4% |
| Pass key Retrieval | 100% | 100% | 100% | 100% |

| Eval task | Test set accuracy | Hard subset accuracy |
|---|---|---|
| Question Answering with Long Input Texts | 46.9% | 40.8% |

Performance

Metric: average number of generated tokens per second (TPS), computed as

TPS = number of generated tokens / end-to-end response time

where the end-to-end response time runs from the moment the inference request is received to the moment the last token is generated.
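
As a rough client-side check of this metric, one can time a single request against the /generate endpoint and read the generated-token count from TGI's details field (endpoint and port as assumed earlier):

import time
import requests

payload = {
    "inputs": "Summarize the following document: ...",
    # details=True makes TGI report the number of generated tokens.
    "parameters": {"max_new_tokens": 512, "details": True},
}
start = time.time()
resp = requests.post("http://localhost:8080/generate", json=payload, timeout=600)
elapsed = time.time() - start  # end-to-end response time

n_tokens = resp.json()["details"]["generated_tokens"]
print(f"TPS = {n_tokens / elapsed:.1f}")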

| Instance | Input length 20 | Input length 3300 | Input length 5500 | Input length 10000 |
|---|---|---|---|---|
| g5.48x | 22 tps | 12 tps | 12 tps | 12 tps |
| g5.12x | 18 tps | 11 tps | 11 tps | 10 tps |

Limitations

  • Our evaluation shows that FalconLite's capability in Line Retrieval is limited and requires further work.
  • While g5.12x is sufficient for FalconLite to handle 10K-token contexts, a larger instance with more memory capacity, such as g5.48x, is recommended for sustained, heavy workloads.
  • Before using the FalconLite model, perform your own independent assessment and take measures to ensure that your use complies with your own quality control practices and standards, and with the local rules, laws, regulations, licenses, and terms that apply to you and your content.