Qubitium committed
Commit 7fb86d8
1 Parent(s): a1307ee

Update README.md

Files changed (1):
  1. README.md +3 -2
README.md CHANGED
@@ -18,9 +18,10 @@ learning_rate = 1.5e-5
 ```
 2. Due to nature of BPE (tiktoken), tokenizer expansion/resize is not very friendly to training. Use text based special tokens if you need/use extra tokens to avoid bad train/eval losses
 
-Known Issues:
+Quants:
 
-1. [QUANT GPTQ] PENDING: You can help test quant and/or follow progress at https://github.com/AutoGPTQ/AutoGPTQ/pull/625
+1. 4bit gptq/marlin: https://huggingface.co/LnL-AI/dbrx-base-converted-v2-4bit-gptq-marlin
+2. 4bit gptq/gptq: https://huggingface.co/LnL-AI/dbrx-base-converted-v2-4bit-gptq-gptq
 ---
 inference: false
 license: other
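The README note above about BPE (tiktoken) tokenizer expansion is a how-to, so here is a minimal sketch of what "text based special tokens" can mean in practice. This is not part of the commit: the model id, the template string, and the commented-out anti-pattern are illustrative assumptions only.

```python
from transformers import AutoTokenizer

# Illustrative only: the model id below is an assumption, not defined by this commit.
tok = AutoTokenizer.from_pretrained("databricks/dbrx-base", trust_remote_code=True)

# Avoid expanding the tiktoken/BPE vocab, which forces an embedding resize and,
# per the README note, tends to hurt train/eval losses:
#   tok.add_special_tokens({"additional_special_tokens": ["<|tool_call|>"]})
#   model.resize_token_embeddings(len(tok))

# Prefer plain-text markers that the existing vocabulary already tokenizes cleanly:
prompt = "### Instruction:\nSummarize the report.\n\n### Response:\n"
input_ids = tok(prompt, return_tensors="pt").input_ids
```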
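Since the commit adds links to 4-bit GPTQ checkpoints, the following is a hedged sketch of loading the GPTQ/Marlin repo through `transformers`. It assumes the `optimum` and `auto-gptq` packages are installed and enough GPU memory is available; the generation call is only a usage illustration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# One of the quant repos linked in the diff above.
model_id = "LnL-AI/dbrx-base-converted-v2-4bit-gptq-marlin"

tok = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",        # spread the 4-bit weights across available GPUs
    trust_remote_code=True,   # DBRX shipped custom modeling code at the time
)

inputs = tok("The capital of France is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=16)
print(tok.decode(out[0], skip_special_tokens=True))
```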