apepkuss79 committed · verified
Commit 0383971 · 1 Parent(s): b0801bb

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -30,7 +30,7 @@ prompt template: `phi-3-chat`

  **Context size**

- chat_ctx_size: `3072`
+ chat_ctx_size: `128000`

  **Run with GaiaNet**

@@ -56,4 +56,4 @@ chat_ctx_size: `3072`
  | [Phi-3-mini-128k-instruct-Q8_0.gguf](https://huggingface.co/gaianet/Phi-3-mini-128k-instruct-GGUF/blob/main/Phi-3-mini-128k-instruct-Q8_0.gguf) | Q8_0 | 8 | 4.06 GB| very large, extremely low quality loss - not recommended |
  | [Phi-3-mini-128k-instruct-f16.gguf](https://huggingface.co/gaianet/Phi-3-mini-128k-instruct-GGUF/blob/main/Phi-3-mini-128k-instruct-f16.gguf) | f16 | 16 | 7.64 GB| |

- *Quantized with llama.cpp b2961.*
+ *Quantized with llama.cpp b3333.*
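For reference, the new `chat_ctx_size` of `128000` matches the model's 128k-token context window. Below is a minimal sketch of loading one of the GGUF files listed above and requesting that context length; using llama-cpp-python (rather than the GaiaNet node runtime) and the local file path are assumptions for illustration only.

```python
# Minimal sketch, assuming llama-cpp-python is installed and the Q8_0 GGUF
# has been downloaded locally. This is not the GaiaNet runtime; it only
# illustrates what a 128k context size means when loading the model.
from llama_cpp import Llama

llm = Llama(
    model_path="Phi-3-mini-128k-instruct-Q8_0.gguf",  # hypothetical local path
    n_ctx=128000,  # corresponds to chat_ctx_size in the updated README
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize this document in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```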