ZeroWw committed
Commit 3a28e73
1 parent: aa1ed19

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
```diff
@@ -13,6 +13,6 @@ Result:
 both f16.q6 and f16.q5 are smaller than q8_0 standard quantization
 and they perform as well as the pure f16.
 
-Note:
-as of now, to run this model you must use: https://github.com/mnlife/llama.cpp
-Later on the PR will be added to the main branch of llama.cpp
+Note:
+as of now, to run this model you must use: https://github.com/mnlife/llama.cpp
+Later on the PR will be added to the main branch of llama.cpp
```
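
For reference, a minimal sketch of building that fork and running one of these quants. The GGUF file name below is a placeholder (substitute the actual file from this repo), and depending on the checkout the llama.cpp binary may be `llama-cli` or the older `main`.

```sh
# Build the fork that carries the PR (not yet merged into mainline llama.cpp).
git clone https://github.com/mnlife/llama.cpp
cd llama.cpp
make

# Run one of the mixed-precision quants.
# "model.f16.q6.gguf" is a placeholder name, not the repo's actual file.
./llama-cli -m ./model.f16.q6.gguf -p "Hello"
```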