---
license: apache-2.0
tags:
- finetuned
- quantized
- 4-bit
- gptq
- transformers
- pytorch
- mistral
- text-generation
- LLMs
- math
- Intel
- en
- dataset:meta-math/MetaMathQA
- dataset:Intel/orca_dpo_pairs
- arxiv:2309.12284
- base_model:meta-math/MetaMath-Mistral-7B
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- has_space
- text-generation-inference
- region:us
model_name: neural-chat-7b-v3-3-Slerp-GPTQ
base_model: Intel/neural-chat-7b-v3-3-Slerp
inference: false
model_creator: Intel
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# Description

[MaziyarPanahi/neural-chat-7b-v3-3-Slerp-GPTQ](https://huggingface.co/MaziyarPanahi/neural-chat-7b-v3-3-Slerp-GPTQ) is a quantized (GPTQ) version of [Intel/neural-chat-7b-v3-3-Slerp](https://huggingface.co/Intel/neural-chat-7b-v3-3-Slerp).

## How to use

### Install the necessary packages

```
pip install --upgrade accelerate auto-gptq transformers
```

### Example Python code

```python
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "MaziyarPanahi/neural-chat-7b-v3-3-Slerp-GPTQ"

# Quantization settings for this checkpoint: 4-bit weights, group size 128,
# no activation reordering.
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=128,
    desc_act=False
)

# Load the already-quantized weights onto the first GPU.
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    use_safetensors=True,
    device="cuda:0",
    quantize_config=quantize_config
)

tokenizer = AutoTokenizer.from_pretrained(model_id)

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,  # required for temperature/top_p to take effect
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.1
)

outputs = pipe("What is a large language model?")
print(outputs[0]["generated_text"])
```
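
### Alternative: load with transformers directly

If you prefer to stay within the `transformers` API, recent versions (4.32+) can load GPTQ checkpoints directly when `optimum` and `auto-gptq` are installed (`pip install optimum`). The snippet below is a minimal sketch of that route; it assumes the repository ships a standard `quantize_config.json`, which the `BaseQuantizeConfig` in the example above mirrors.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MaziyarPanahi/neural-chat-7b-v3-3-Slerp-GPTQ"

# transformers detects the GPTQ quantization config in the repo and
# dispatches to the auto-gptq backend; device_map="auto" places the
# quantized weights on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("What is a large language model?", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Both routes load the same quantized weights; the `transformers`-native path is convenient when the model is one component in a larger pipeline, while `AutoGPTQForCausalLM` exposes auto-gptq-specific options.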