TheBloke committed
Commit
e7f7d5b
1 Parent(s): 219564e

Update README.md

Files changed (1):
  README.md +7 -1
README.md CHANGED
@@ -18,7 +18,7 @@ pipeline_tag: text-generation
 
 # Manticore 13B GGML
 
-This is GGML format quantised 4bit and 5bit models of [OpenAccess AI Collective's Manticore 13B](https://huggingface.co/openaccess-ai-collective/manticore-13b).
+This is GGML format quantised 4-bit, 5-bit and 8-bit models of epoch 3 of [OpenAccess AI Collective's Manticore 13B](https://huggingface.co/openaccess-ai-collective/manticore-13b).
 
 This repo is the result of quantising to 4-bit, 5-bit and 8-bit GGML for CPU (+CUDA) inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).
 
@@ -36,6 +36,12 @@ I have quantised the GGML files in this repo with the latest version. Therefore
 
 For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`.
 
+## Epoch
+
+The files in the `main` branch are from Epoch 3 of Manticore 13B, as of May 19th.
+
+The files in the `previous_llama_ggmlv2` branch are from Epoch 1.
+
 ## Provided files
 | Name | Quant method | Bits | Size | RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
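
Because the Epoch 3 (GGMLv3) and Epoch 1 (GGMLv2) files live on different branches, a download has to pin the right revision. Below is a minimal sketch using `huggingface_hub`; the repo id `TheBloke/Manticore-13B-GGML` and the filename are assumptions, so substitute the real names from the "Provided files" table.

```python
# Sketch: fetch a quantised GGML file from this repo with huggingface_hub.
# The repo id and filename below are assumptions -- substitute the real
# values from the "Provided files" table in the README.
from huggingface_hub import hf_hub_download

REPO_ID = "TheBloke/Manticore-13B-GGML"      # assumed repo id
FILENAME = "Manticore-13B.ggmlv3.q4_0.bin"   # hypothetical filename

# The default revision is `main`, i.e. the Epoch 3 / GGMLv3 files.
model_path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME)

# For files readable by the previous llama.cpp format (Epoch 1),
# pin the branch named in the README explicitly:
older_path = hf_hub_download(
    repo_id=REPO_ID,
    filename=FILENAME,                       # name may differ on this branch
    revision="previous_llama_ggmlv2",
)
print(model_path)
print(older_path)
```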
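
For the CPU (+CUDA) inference the README mentions, a rough sketch follows using the `llama-cpp-python` bindings rather than the llama.cpp CLI itself; note this is my substitution, and an older release of the bindings is needed, since current versions read GGUF rather than GGML. The prompt is arbitrary: Manticore's expected prompt template is not shown here.

```python
# Sketch: run the downloaded GGML file with llama-cpp-python
# (Python bindings around llama.cpp; use a release old enough to read GGML).
from llama_cpp import Llama

# model_path comes from the hf_hub_download call in the previous sketch.
llm = Llama(model_path=model_path, n_ctx=2048)

output = llm(
    "Explain GGML quantisation in one sentence.",
    max_tokens=64,
)
print(output["choices"][0]["text"])
```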