Model Details

This is an int4 model with group size 128, quantized from THUDM/chatglm2-6b using intel/auto-round. Inference with this model is compatible with the AutoGPTQ kernel.
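
As a quick illustration, the quantized weights can be loaded through AutoGPTQ's Python API (the same path the evaluation command below uses via autogptq=True). This is a minimal sketch, assuming auto-gptq, transformers, and torch are installed; the prompt and generation settings are placeholders, not a validated recipe.

# Minimal inference sketch; prompt and generation settings are illustrative.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name = "Intel/chatglm2-6b-int4-inc"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_name,
    device="cuda:0",
    trust_remote_code=True,  # chatglm2 ships custom modeling code
)

prompt = "What is deep learning?"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))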

Evaluate the model

Install lm-eval-harness from source; we used git commit 96d185fa6232a5ab685ba7c43e45d1dbb3bb906d.
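
For reference, a typical from-source install pinning that commit looks like the following (assuming the upstream EleutherAI/lm-evaluation-harness repository, which the card does not name explicitly):

git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
git checkout 96d185fa6232a5ab685ba7c43e45d1dbb3bb906d
pip install -e .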

lm_eval --model hf --model_args pretrained="Intel/chatglm2-6b-int4-inc",autogptq=True,gptq_use_triton=True --device cuda:0 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,rte,arc_easy,arc_challenge,mmlu --batch_size 32

Reproduce the model

Here is a sample command to reproduce the model:

git clone https://github.com/intel/auto-round
cd auto-round/examples/language-modeling
pip install -r requirements.txt
python3 main.py \
--model_name THUDM/chatglm2-6b \
--device 0 \
--group_size 128 \
--bits 4 \
--nsamples 512 \
--iters 200 \
--deployment_device 'gpu' \
--disable_quanted_input \
--output_dir "./tmp_autoround"
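
For those who prefer calling auto-round from Python rather than through main.py, a rough equivalent of the command above might look like the sketch below. The AutoRound class and the argument names (nsamples, iters, enable_quanted_input, the format value) are taken from recent auto-round releases and may not match the exact version used to produce this model; treat it as illustrative rather than authoritative.

# Hedged sketch of the Python-API equivalent of the CLI command above.
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "THUDM/chatglm2-6b"
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

autoround = AutoRound(
    model,
    tokenizer,
    bits=4,
    group_size=128,
    nsamples=512,
    iters=200,
    enable_quanted_input=False,  # mirrors --disable_quanted_input above
)
autoround.quantize()
# "auto_gptq" targets the GPU/AutoGPTQ deployment path, matching --deployment_device 'gpu'
autoround.save_quantized("./tmp_autoround", format="auto_gptq")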

Caveats and Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

Here are a couple of useful links to learn more about Intel's AI software:

  • Intel Neural Compressor: https://github.com/intel/neural-compressor
  • Intel Extension for Transformers: https://github.com/intel/intel-extension-for-transformers

Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

Cite

@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of llms},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}

arXiv: https://arxiv.org/abs/2309.05516
GitHub: https://github.com/intel/auto-round
