Commit c3093aa (LukasHug, verified, parent e41612d): Update README.md
  - lmms-lab/llava-onevision-qwen2-7b-ov
---

## Model Summary
LlavaGuard-v1.2-7B-OV is trained on [LlavaGuard-DS](https://huggingface.co/datasets/AIML-TUDA/LlavaGuard) and is based on the llava-onevision-qwen2-7b-ov model with a context window of 32K tokens.

- Model versions: [SGLang](https://huggingface.co/datasets/AIML-TUDA/LlavaGuard-v1.2-7B-OV), [Transformers](https://huggingface.co/datasets/AIML-TUDA/LlavaGuard-v1.2-7B-OV-HF)
- Repository: [ml-research/LlavaGuard](https://github.com/ml-research/LlavaGuard)
- Project website: [LlavaGuard](https://ml-research.github.io/human-centered-genai/projects/llavaguard/index.html)
- Paper: [LlavaGuard (arXiv)](https://arxiv.org/abs/2406.05113)

## Model Compatibility

- Inference: SGLang ✅, LLaVA [repo](https://github.com/LLaVA-VL/LLaVA-NeXT) ✅, HF Transformers ❌
- Model tuning: ✅
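Since inference runs against an SGLang server rather than HF Transformers, requests typically follow the OpenAI-style chat format that SGLang servers expose. The sketch below only assembles such a payload locally; the message layout, model name, and prompt text are illustrative assumptions, not part of this card:

```python
import json


def build_request(image_url: str, policy_prompt: str,
                  model: str = "AIML-TUDA/LlavaGuard-v1.2-7B-OV") -> dict:
    """Assemble an OpenAI-style chat payload for a vision safety assessment.

    The content-part layout (an image_url part followed by a text prompt)
    follows the OpenAI-compatible API commonly served by SGLang; treat the
    model name and prompt here as placeholders for your own deployment.
    """
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "image_url", "image_url": {"url": image_url}},
                    {"type": "text", "text": policy_prompt},
                ],
            }
        ],
        # Deterministic decoding is usually preferred for safety ratings.
        "temperature": 0.0,
    }


payload = build_request("https://example.com/image.png",
                        "Assess the image against the safety policy.")
print(json.dumps(payload, indent=2))
```

Such a payload would then be POSTed to the server's chat-completions endpoint; see the SGLang documentation for the exact serving flags and endpoint path.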

## Overview
Here we provide the [SGLang](https://github.com/sgl-project/sglang) weights for LlavaGuard v1.2 7B.
It builds upon LLaVA-OneVision 7B and achieves our best overall performance so far, with improved reasoning within its rationales.
This version is not compatible with the HF Transformers implementation and must be used with the SGLang or LLaVA implementation.
The model is also compatible with LoRA tuning as well as full fine-tuning.
For tuning, you can adapt and use the training scripts provided in our repository ([ml-research/LlavaGuard](https://github.com/ml-research/LlavaGuard)).
A suitable Docker image can be found in our GitHub repo as well.

#### Usage
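Whichever client you use, LlavaGuard replies with a structured safety assessment. A minimal validation sketch follows; the field names (`rating`, `category`, `rationale`) and the example category string are assumptions based on the output format described in the LlavaGuard paper, so match them to whatever schema your policy prompt requests:

```python
import json


def parse_assessment(raw: str) -> dict:
    """Parse a LlavaGuard-style JSON safety assessment and sanity-check it.

    The expected keys ("rating", "category", "rationale") follow the output
    format described in the LlavaGuard paper; adapt them if your policy
    prompt asks the model for a different schema.
    """
    assessment = json.loads(raw)
    missing = {"rating", "category", "rationale"} - assessment.keys()
    if missing:
        raise ValueError(f"assessment is missing fields: {sorted(missing)}")
    if assessment["rating"] not in {"Safe", "Unsafe"}:
        raise ValueError(f"unexpected rating: {assessment['rating']!r}")
    return assessment


# Hand-written example response (not model-generated):
example = ('{"rating": "Safe", '
           '"category": "O1: Hate, Humiliation, Harassment", '
           '"rationale": "No policy violation is depicted."}')
result = parse_assessment(example)
print(result["rating"])  # → Safe
```

Validating the response before acting on it is worthwhile because generative raters can occasionally emit malformed JSON, especially at nonzero temperature.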