Tags: Text Generation · Transformers · Safetensors · English · llama · text-generation-inference · 4-bit precision · awq
Commit cb3acba committed by TheBloke
1 Parent(s): b04c94f

Upload README.md

Files changed (1):
  1. README.md (+2 -1)
README.md CHANGED
@@ -104,7 +104,7 @@ Models are released as sharded safetensors files.
 
 | Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
 | ------ | ---- | -- | ----------- | ------- | ---- |
-| [main](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 16384 | 19.23 GB
+| [main](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 19.23 GB
 
 <!-- README_AWQ.md-provided-files end -->
 
@@ -384,6 +384,7 @@ This model is based on Yi, and is subject to Yi license.
 
 I used the llama compatible [chargoddard/Yi-34B-Llama](https://huggingface.co/chargoddard/Yi-34B-Llama) as the base model.
 
+Trained with 16k context.
 You can load it as follows:
 
 ```
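As a rough sanity check on the Size column in the table above (this arithmetic is not part of the commit, and the parameter count is an assumption): 4-bit AWQ with group size (GS) 128 stores about 4 bits per weight plus amortized per-group scale/zero-point metadata, which lands in the same ballpark as the listed 19.23 GB.

```python
# Back-of-the-envelope estimate of 4-bit AWQ model size; all figures
# are assumptions, not values taken from the commit.
PARAMS = 34.4e9            # approximate parameter count for a Yi-34B model (assumption)
BITS_PER_WEIGHT = 4        # "Bits" column in the table
GROUP_SIZE = 128           # "GS" column in the table
OVERHEAD_BITS = 32 / GROUP_SIZE  # fp16 scale + fp16 zero per group, amortized per weight

bytes_total = PARAMS * (BITS_PER_WEIGHT + OVERHEAD_BITS) / 8
gb = bytes_total / 1e9
print(f"estimated size: {gb:.1f} GB")  # ballpark of the 19.23 GB in the table
```

The estimate undershoots the listed size slightly; unquantized tensors (embeddings, norms) and file-format overhead plausibly account for the remainder.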