---
pipeline_tag: image-text-to-text
---

# TRUST-VL Model Card

<div align="center">
  <img src="https://github.com/YanZehong/TRUST-VL/blob/main/images/trust-vl-logo.png?raw=true" width="60%" alt="TRUST-VL" />
</div>

## Model Details

TRUST-VL is a unified and explainable vision-language model for general multimodal misinformation detection. It incorporates a novel Question-Aware Visual Amplifier module designed to extract task-specific visual features. To support training, we also construct TRUST-Instruct, a large-scale instruction dataset of 198K samples with structured reasoning chains aligned with human fact-checking workflows. Extensive experiments on both in-domain and zero-shot benchmarks show that TRUST-VL achieves state-of-the-art performance while offering strong generalization and interpretability.
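
To experiment with the checkpoint locally, the repository files can be fetched with `huggingface_hub`. The sketch below is a minimal example, assuming the model is hosted under the repository id `YanZehong/TRUST-VL` (a guess based on the project's GitHub organization; substitute the actual id if it differs).

```python
# Minimal sketch: download the TRUST-VL checkpoint folder from the Hub.
# Assumption: the repo id "YanZehong/TRUST-VL" mirrors the GitHub project;
# replace it with the actual repository id if it differs.
from huggingface_hub import snapshot_download

# Downloads all files in the repo (weights, config, processor files) into
# the local Hugging Face cache and returns the path to that folder.
local_dir = snapshot_download(repo_id="YanZehong/TRUST-VL")
print(f"Model files downloaded to: {local_dir}")
```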