## Model Details

This model is an int4 model with group_size 128 of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b), generated by [intel/auto-round](https://github.com/intel/auto-round).

## How To Use

### INT4 Inference with AutoGPTQ

```python
## pip install auto-gptq[triton]
## pip install triton==2.2.0
import transformers
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

quantized_model_dir = "Intel/falcon-7b-int4-inc"

# Load the tokenizer and the int4 AutoGPTQ checkpoint with Triton kernels
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir)
model = AutoGPTQForCausalLM.from_quantized(
    quantized_model_dir,
    device="cuda:0",
    use_safetensors=True,
    use_triton=True,
    trust_remote_code=True,
)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "There is a girl who likes adventure,",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

### Evaluate the model

Install [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness.git) from source at commit 96d185fa6232a5ab685ba7c43e45d1dbb3bb906d, along with the inference dependencies:

```bash
pip install auto-gptq[triton]
pip install triton==2.2.0
```

Since we encountered an issue evaluating this model with lm-eval, we opted to evaluate the qdq model instead, i.e., the model with weights quantized to int4 and dequantized back to FP16 (fake-quantized). In our assessment, its accuracy closely matches that of the real quantized model in most cases, except for some small models such as opt-125m. A batch size of 32 is used; an example harness invocation is sketched after the Disclaimer section below.

| Metric         | FP16   | int4 qdq |
| -------------- | ------ | -------- |
| Avg.           | 0.5521 | 0.5507   |
| mmlu           | 0.2495 | 0.2427   |
| lambada_openai | 0.7452 | 0.7487   |
| hellaswag      | 0.5771 | 0.5731   |
| winogrande     | 0.6725 | 0.6756   |
| piqa           | 0.7949 | 0.7943   |
| truthfulqa_mc1 | 0.2252 | 0.2142   |
| openbookqa     | 0.3060 | 0.3060   |
| boolq          | 0.7364 | 0.7382   |
| rte            | 0.6173 | 0.6245   |
| arc_easy       | 0.7479 | 0.7433   |
| arc_challenge  | 0.4019 | 0.3968   |

### Reproduce the model

Here is a sample command to reproduce the model:

```bash
git clone https://github.com/intel/auto-round
cd auto-round/examples/language-modeling
pip install -r requirements.txt
python3 main.py \
  --model_name tiiuae/falcon-7b \
  --device 0 \
  --group_size 128 \
  --bits 4 \
  --iters 1000 \
  --deployment_device 'gpu' \
  --output_dir "./tmp_autoround"
```

## Ethical Considerations and Limitations

The model can produce factually incorrect output and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

Therefore, before deploying any applications of the model, developers should perform safety testing.

## Caveats and Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

Here are a couple of useful links to learn more about Intel's AI software:

* [Intel Neural Compressor](https://github.com/intel/neural-compressor)
* [Intel Extension for Transformers](https://github.com/intel/intel-extension-for-transformers)

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
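## Appendix: Example Evaluation Command

The card pins the harness commit but does not show the evaluation command itself. Below is a minimal sketch, assuming the post-refactor `lm_eval` CLI is available at that commit; the task list and batch size mirror the results table above, while `dtype=float16` and `trust_remote_code=True` are illustrative assumptions rather than settings taken from the card.

```bash
# Sketch only: assumes the `lm_eval` CLI from lm-eval-harness at the pinned
# commit. Evaluates the FP16 baseline on the tasks reported above.
lm_eval --model hf \
  --model_args pretrained=tiiuae/falcon-7b,trust_remote_code=True,dtype=float16 \
  --tasks mmlu,lambada_openai,hellaswag,winogrande,piqa,truthfulqa_mc1,openbookqa,boolq,rte,arc_easy,arc_challenge \
  --device cuda:0 \
  --batch_size 32
```

To score the qdq variant instead, the same command would point `pretrained` at a fake-quantized checkpoint exported by auto-round.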
## Cite

```bibtex
@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of LLMs},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}
```

[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)