---
library_name: transformers
license: gemma
base_model:
- google/gemma-2-9b-it
---

# This model has been xMADified!

This repository contains [`google/gemma-2-9b-it`](https://huggingface.co/google/gemma-2-9b-it) quantized from 16-bit floats to 4-bit integers, using xMAD.ai proprietary technology.

# Why should I use this model?

1. **Accuracy:** This xMADified model is the *best* quantized version of the [`google/gemma-2-9b-it`](https://huggingface.co/google/gemma-2-9b-it) model (only 8 GB). See _Table 1_ below for model quality benchmarks.

2. **Memory efficiency:** The full-precision model is around 18.5 GB, while this xMADified model is only around 8 GB, making it feasible to run on a 12 GB GPU.

3. **Fine-tuning:** These models can be fine-tuned on the same reduced hardware (a 12 GB GPU) in just 3 clicks. Watch our product demo [here](https://www.youtube.com/watch?v=S0wX32kT90s&list=TLGGL9fvmJ-d4xsxODEwMjAyNA).

## Table 1: xMAD vs. Hugging Quants

| Model | MMLU | Arc Challenge | Arc Easy | LAMBADA Standard | LAMBADA OpenAI | PIQA | WinoGrande |
|---|---|---|---|---|---|---|---|
| [xmadai/gemma-2-9b-it-xMADai-INT4](https://huggingface.co/xmadai/gemma-2-9b-it-xMADai-INT4) (this model) | **71.17** | **62.37** | **85.61** | **70.60** | **72.15** | **81.50** | **75.06** |
| [hugging-quants/gemma-2-9b-it-AWQ-INT4](https://huggingface.co/hugging-quants/gemma-2-9b-it-AWQ-INT4) | 71.04 | 61.77 | 85.14 | 69.16 | 70.68 | 80.41 | 75.06 |

# How to Run the Model

Loading this xMADified model's checkpoint requires around 8 GB of VRAM, so it runs comfortably on a 12 GB GPU.

**Package prerequisites**: Run the following commands to install the required packages.

```bash
pip install torch==2.4.0
# If you have CUDA version 11.8, install torch from the cu118 index instead:
# pip install torch==2.4.0 --index-url https://download.pytorch.org/whl/cu118
pip install transformers accelerate optimum
pip install -vvv --no-build-isolation "git+https://github.com/PanQiWei/AutoGPTQ.git@v0.7.1"
```

**Sample Inference Code**

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "xmadai/gemma-2-9b-it-xMADai-INT4"

# Gemma-2 chat templates do not accept a separate "system" role, so the
# persona instruction is folded into the user turn.
prompt = [
    {"role": "user", "content": "You are a helpful assistant that responds as a pirate. What's Deep Learning?"},
]

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)

# Format the chat prompt and move the input tensors to the GPU.
inputs = tokenizer.apply_chat_template(
    prompt,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
).to("cuda")

# Load the 4-bit GPTQ checkpoint.
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    device_map='auto',
    trust_remote_code=True,
)

outputs = model.generate(**inputs, do_sample=True, max_new_tokens=1024)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```

# Citation

If you found this model useful, please cite our research paper.

```
@article{zhang2024leanquant,
  title={LeanQuant: Accurate and Scalable Large Language Model Quantization with Loss-error-aware Grid},
  author={Zhang, Tianyi and Shrivastava, Anshumali},
  journal={arXiv preprint arXiv:2407.10032},
  year={2024},
  url={https://arxiv.org/abs/2407.10032},
}
```

# Contact Us

For additional xMADified models, access to fine-tuning, and general questions, please contact us at support@xmad.ai and join our waiting list.
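
If you want to confirm the ~8 GB VRAM figure quoted in *How to Run the Model* on your own hardware, the sketch below is one hedged way to do it (it is not part of the official xMAD workflow): it loads the 4-bit checkpoint, runs a short generation, and reports the peak GPU memory allocated via `torch.cuda.max_memory_allocated`. The exact number will vary with prompt length, `max_new_tokens`, and your CUDA/driver versions.

```python
# Minimal sketch (assumes the packages from the prerequisites above are installed)
# for measuring the peak VRAM footprint of the 4-bit checkpoint on one GPU.
import torch
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "xmadai/gemma-2-9b-it-xMADai-INT4"

# Reset the peak-memory counter so the measurement covers only this run.
torch.cuda.reset_peak_memory_stats()

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoGPTQForCausalLM.from_quantized(model_id, device_map="auto", trust_remote_code=True)

inputs = tokenizer("What's Deep Learning?", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=32)

# Peak allocation across loading and generation, reported in GiB.
print(f"Peak GPU memory allocated: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GiB")
```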