MetaIX committed
Commit 7d6f1b5
1 parent: a419422

Update README.md

Files changed (1): README.md (+2 −2)
@@ -8,8 +8,8 @@ OpenAssistant-Llama-30B-4-bit working with GPTQ versions used in Oobabooga's Tex
 <P>GPTQ: 2 quantized versions. One quantized --true-sequential and act-order optimizations, and the other was quantized using --true-sequential --groupsize 128 optimizations</P>
 <P>GGML: 3 quantized versions. One quantized using q4_1, another one was quantized using q5_0, and the last one was quantized using q5_1.</P>
 
-<p><strong><font size="5">Update 05.19.2023</font></strong></p>
-<p>Updated the ggml quantizations to be compatible with the latest version of llamacpp.</p>
+<p><strong><font size="5">Update 05.27.2023</font></strong></p>
+<p>Updated the ggml quantizations to be compatible with the latest version of llamacpp (again).</p>
 
 <p><strong><font size="5">Update 04.29.2023</font></strong></p>
 <p>Updated to the latest fine-tune by Open Assistant <a href="https://huggingface.co/OpenAssistant/oasst-sft-7-llama-30b-xor">oasst-sft-7-llama-30b-xor</a>.</p>
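The README lines in this diff name concrete quantization settings (GPTQ with --true-sequential, act-order, and --groupsize 128; GGML q4_1, q5_0, q5_1). As a hedged sketch of how variants like these are typically produced — file paths, model names, and repo layout below are illustrative assumptions, not taken from this commit:

```shell
# Sketch only: paths and output names are hypothetical.

# GPTQ (GPTQ-for-LLaMa style CLI): the two variants the README describes.
python llama.py /path/to/oasst-llama-30b c4 --wbits 4 \
    --true-sequential --act-order \
    --save_safetensors oasst-llama-30b-4bit.safetensors
python llama.py /path/to/oasst-llama-30b c4 --wbits 4 \
    --true-sequential --groupsize 128 \
    --save_safetensors oasst-llama-30b-4bit-128g.safetensors

# GGML (llama.cpp quantize tool): the three types the README describes,
# each converting the same f16 base model to a different quantization.
./quantize oasst-llama-30b-f16.bin oasst-llama-30b-q4_1.bin q4_1
./quantize oasst-llama-30b-f16.bin oasst-llama-30b-q5_0.bin q5_0
./quantize oasst-llama-30b-f16.bin oasst-llama-30b-q5_1.bin q5_1
```

The compatibility note in the diff reflects that llama.cpp's ggml file format changed between releases in this period, so ggml files had to be re-quantized from the base weights to load in newer builds.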