---
license: cc
datasets:
- VMware/open-instruct-v1-oasst-dolly-hhrlhf
language:
- en
pipeline_tag: text-generation
---
# SearchUnify-ML/xgen-7b-8k-open-instruct-gptq
These are GPTQ 4-bit model files for [VMware's XGen 7B 8K Open Instruct](https://huggingface.co/VMware/xgen-7b-8k-open-instruct).
They are the result of quantising the model to 4-bit using GPTQ-for-LLaMa.
The model is open for commercial use.
## How to use this GPTQ model from Python code
First, make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
```shell
pip install auto-gptq
```
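You can then load the quantised model with AutoGPTQ and generate text. The following is a minimal sketch: the `device`, generation parameters, and the Alpaca-style prompt template are assumptions (XGen-based instruct models commonly use this format), and `trust_remote_code=True` is needed because XGen ships a custom tokenizer.

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "SearchUnify-ML/xgen-7b-8k-open-instruct-gptq"

# XGen uses a custom tokenizer, so trust_remote_code is required
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)

# Load the 4-bit GPTQ weights onto the first GPU (adjust device as needed)
model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    use_safetensors=True,
    device="cuda:0",
    trust_remote_code=True,
)

# Assumed Alpaca-style instruction template; adjust if your prompts differ
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what GPTQ quantisation is.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=256, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Note that this requires a CUDA-capable GPU with enough VRAM for the 4-bit weights (roughly 5 GB for a 7B model, plus activation memory).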