---
language: en
license: other
tags:
- meta
- GPTQ
- llama
- llama2
base_model: meta-llama/Llama-2-7b-chat-hf
model_name: Llama-2-7b-chat-hf-AutoGPTQ
library:
- Transformers
- GPTQ
model_type: llama
pipeline_tag: text-generation
quantized_by: twhoool02
---
# Model Card for twhoool02/Llama-2-7b-chat-hf-AutoGPTQ

## Model Details

This model is a GPTQ-quantized version of the meta-llama/Llama-2-7b-chat-hf model.

- **Developed by:** Ted Whooley
- **Library:** Transformers, GPTQ
- **Model type:** llama
- **Model name:** Llama-2-7b-chat-hf-AutoGPTQ
- **Pipeline tag:** text-generation
- **Quantized by:** twhoool02
- **Language(s) (NLP):** en
- **License:** other
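
## Usage

The snippet below is a minimal usage sketch, not an official example from the model author. It assumes the quantized weights are published at `twhoool02/Llama-2-7b-chat-hf-AutoGPTQ` (the repository named in this card) and that `transformers`, `optimum`, and `auto-gptq` are installed; the prompt and generation settings are only illustrative.

```python
# Minimal sketch: load the GPTQ-quantized checkpoint and run a short generation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "twhoool02/Llama-2-7b-chat-hf-AutoGPTQ"  # repository named in this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
# transformers dispatches GPTQ checkpoints through optimum/auto-gptq;
# device_map="auto" places the quantized weights on the available device.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt; Llama-2-chat models expect the [INST] ... [/INST] chat format.
prompt = "[INST] What is GPTQ quantization? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```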