cmp-nct / osanseviero (HF staff) committed
Commit 4d811ed
1 parent: d32b12d

Add VLLM tag (#6)


- Add VLLM tag (a26b35e8d5d25458cf18d2b53e5b566d0d8a3dd0)


Co-authored-by: Omar Sanseviero <osanseviero@users.noreply.huggingface.co>

Files changed (1):
1. README.md (+2 -2)
README.md CHANGED
@@ -1,5 +1,6 @@
 ---
 license: apache-2.0
+pipeline_tag: image-text-to-text
 ---
 Update: PR is merged, llama.cpp now natively supports these models
 Important: Verify that processing a simple question with any image at least uses 1200 tokens of prompt processing, that shows that the new PR is in use.
@@ -11,5 +12,4 @@ If your prompt is just 576 + a few tokens, you are using llava-1.5 code (or proj
 The mmproj files are the embedded ViT's that came with llava-1.6, I've not compared them but given the previous releases from the team I'd be surprised if the ViT has not been fine tuned this time.
 If that's the case, using another ViT can cause issues.
 
-Original models: https://github.com/haotian-liu/LLaVA
-
+Original models: https://github.com/haotian-liu/LLaVA
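
For context on the token-count check quoted in the README above, here is a rough sketch of where the 576-token and 1200+-token figures come from, assuming the CLIP ViT-L/14 encoder at 336x336 resolution that LLaVA uses; the exact llava-1.6 tile-grid selection depends on the image and is simplified here.

```python
# Back-of-the-envelope math for the README's token check.
# A sketch only; not llama.cpp's actual implementation.

PATCH_SIZE = 14   # CLIP ViT-L/14 patch size
TILE_RES = 336    # per-tile input resolution for llava-1.5/1.6

# One 336x336 tile -> a 24x24 grid of patches -> 576 image tokens,
# which is the llava-1.5 figure mentioned in the README.
tokens_per_tile = (TILE_RES // PATCH_SIZE) ** 2
assert tokens_per_tile == 576

# llava-1.6 encodes a base tile plus a grid of high-resolution tiles
# (a 2x2 grid is assumed here for illustration), so even a simple
# question with one image lands well above the 1200-token threshold.
tiles = 1 + 2 * 2
print(tokens_per_tile * tiles)  # 2880
```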
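
As for the commit's own change, it is a single line of model-card front matter. As a usage note, the same edit could be made programmatically with `metadata_update` from `huggingface_hub` rather than through the web UI; a minimal sketch follows, and the repo id in it is a hypothetical placeholder, not the actual repository.

```python
# Sketch: applying the same pipeline_tag change via the Hub client
# instead of a web PR. The repo id is a hypothetical placeholder.
from huggingface_hub import metadata_update

metadata_update(
    "user/repo",                             # placeholder repo id
    {"pipeline_tag": "image-text-to-text"},  # the line this commit adds
    create_pr=True,                          # open a PR, as was done here
)
```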