Base PR: https://github.com/ggerganov/llama.cpp/pull/5267

Note: integration in llama.cpp is partial at this point - without that PR applied, it falls back to llava-1.5 preprocessing

Important: verify that processing a simple question with any image uses at least 1200 tokens of prompt processing - that shows the new PR is in use.
If your prompt is just 576 + a few tokens, you are using llava-1.5 code (or the llava-1.5 projector), which is incompatible with llava-1.6
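
A quick way to check is to run llama.cpp's llava-cli and read the prompt token count from the timing summary it prints at the end. A minimal sketch, assuming hypothetical file names for the model, projector, and image (substitute your own paths):

```sh
./llava-cli \
    -m ./llava-v1.6-7b.Q5_K_M.gguf \
    --mmproj ./mmproj-llava-v1.6-7b-f16.gguf \
    --image ./test.jpg \
    -p "Describe this image."

# Check the prompt eval line in the timing output, e.g.:
#   llama_print_timings: prompt eval time = ... ms / 2144 tokens
# >= 1200 tokens: the llava-1.6 preprocessing path is active
# ~576 + a few tokens: you are still on the llava-1.5 path
```

The exact count varies with image resolution and prompt length; the 1200-token figure is just the sanity threshold from the note above.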

The mmproj files are the embedded ViTs that came with llava-1.6. I've not compared them, but given the team's previous releases I'd be surprised if the ViT has not been fine-tuned this time.
If that's the case, using another ViT can cause issues.