---
inference: false
---
# Vezora/Dolphin-llama-Instruct-8b AWQ

**PROCESSING ... ETA 30 minutes**

- Model creator: [Vezora](https://huggingface.co/Vezora)
- Original model: [Dolphin-llama-Instruct-8b](https://huggingface.co/Vezora/Dolphin-llama-Instruct-8b)

### About AWQ

AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.
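
For context, a 4-bit AWQ checkpoint such as this one is typically produced with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ). The sketch below uses common default settings (4-bit weights, group size 128, GEMM kernels); the exact settings used for this model are not stated here, so the config is illustrative only:

```python
# Minimal AWQ quantization sketch using AutoAWQ.
# The quant_config values are common defaults, not this model's confirmed settings.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "Vezora/Dolphin-llama-Instruct-8b"  # the original FP16 model
quant_path = "Dolphin-llama-Instruct-8b-AWQ"     # local output directory

quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Calibrate, quantize the weights to 4 bits, and save the AWQ checkpoint.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```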

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users should use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using the AutoAWQ loader
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later supports all AWQ model types (see the serving sketch below)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) - version 4.35.0 or later, from any code or client that supports Transformers (see the loading sketch below)
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
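
Once processing finishes, loading with Transformers should look roughly like the sketch below. The repo id is a placeholder (substitute this repository's actual id), and it assumes `transformers >= 4.35.0` plus the `autoawq` package are installed and an NVIDIA GPU is available:

```python
# Minimal loading sketch; "Dolphin-llama-Instruct-8b-AWQ" is a placeholder
# for this repository's actual id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Dolphin-llama-Instruct-8b-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # place the 4-bit weights on the available NVIDIA GPU
)

prompt = "Write a short poem about dolphins."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

A corresponding offline-inference sketch for vLLM (again with a placeholder repo id), assuming vLLM 0.2.2 or later:

```python
# Offline inference sketch with vLLM; the model id is a placeholder.
from vllm import LLM, SamplingParams

llm = LLM(model="Dolphin-llama-Instruct-8b-AWQ", quantization="awq")
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Write a short poem about dolphins."], params)
print(outputs[0].outputs[0].text)
```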