IHaveNoClueAndIMustPost committed on
Commit
94d7beb
1 Parent(s): 4cb0fbe

Update README.md

Files changed (1): README.md (+3 −1)
README.md CHANGED
@@ -7,6 +7,8 @@ tags:
 - text-generation-inference
 ---
 This is [Llama2-22b](https://huggingface.co/chargoddard/llama2-22b) in a couple of GGML formats. I have no idea what I'm doing, so if something doesn't work as it should, or not at all, that's likely on me, not the models themselves.<br>
+A second model merge has been [released](https://huggingface.co/chargoddard/llama2-22b-blocktriangular), and the GGML conversions for that can be found [here](https://huggingface.co/IHaveNoClueAndIMustPost/llama2-22b-blocktriangular-GGML).
+
 While I haven't had any issues so far, do note that the original repo states <i>"Not intended for use as-is - this model is meant to serve as a base for further tuning"</i>.
 
 Approximate VRAM requirements at 4K context:
@@ -38,4 +40,4 @@ Approximate VRAM requirements at 4K context:
 <td style='text-align: center'>14.5GB</td>
 </tr>
 </tbody>
-</table>
+</table>