Upload README.md
README.md (CHANGED)
@@ -59,7 +59,6 @@ It is also now supported by continuous batching server [vLLM](https://github.com
 * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-7B-AWQ)
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-7B-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-7B-GGUF)
-* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/Llama-2-7B-GGML)
 * [Meta's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-7b-hf)
 <!-- repositories-available end -->

@@ -83,7 +82,7 @@ Models are released as sharded safetensors files.

 | Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
 | ------ | ---- | -- | ----------- | ------- | ---- |
-| [main](https://huggingface.co/TheBloke/Llama-2-7B-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 |
+| [main](https://huggingface.co/TheBloke/Llama-2-7B-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | Processing, coming soon
 <!-- README_AWQ.md-provided-files end -->
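The context line in the first hunk notes that these AWQ files are supported by the continuous-batching server vLLM. As a minimal usage sketch only (not part of the committed README; it assumes a CUDA GPU and an installed `vllm` package, with the model name and 4096 sequence length taken from the table above):

```python
# Hedged sketch: serving the 4-bit AWQ files from the `main` branch with vLLM.
# Requires a CUDA GPU and `pip install vllm`; downloads the model on first run.
from vllm import LLM, SamplingParams

# quantization="awq" tells vLLM to load the AWQ-quantised weights;
# max_model_len mirrors the Seq Len column of the provided-files table.
llm = LLM(model="TheBloke/Llama-2-7B-AWQ",
          quantization="awq",
          max_model_len=4096)

params = SamplingParams(temperature=0.7, max_tokens=64)
outputs = llm.generate(["Tell me about AI"], params)
print(outputs[0].outputs[0].text)
```

Loading the same repository through `transformers` (with the `autoawq` package installed) is another option mentioned in READMEs of this series, but the vLLM path is the one this hunk's context line refers to.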