---
license: bsd-3-clause
library_name: pytorch
pipeline_tag: image-classification
tags:
- facial-forgery-detection
- multi-label-classification
- vit
- deepfake
- acl-2026
---

# Face-ViT: Multi-Label Facial Forgery Region Classifier

## 📖 Model Description
This is the **Face-ViT** auxiliary perception module proposed in the ACL 2026 paper
*"Generating Attribution Reports for Manipulated Facial Images: A Dataset and Baseline"*.

Face-ViT is a multi-label classifier based on the **ViT-H/14** architecture, trained to recognize 21 types of facial manipulation (e.g., eye modification, skin smoothing, mouth tampering). In the DFF framework, it provides the fine-grained visual cues that guide the large language model to generate accurate forensic explanations.
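The card does not ship an inference snippet, so the sketch below only illustrates the multi-label convention it describes: one sigmoid probability per manipulation type, with every class above a threshold reported. `TinyBackbone` is a hypothetical stand-in so the example runs without the released checkpoint; the real model is a ViT-H/14 with an extra CNN branch, and its actual loading code lives in the official repository linked below.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 21  # manipulation types listed by this card


class TinyBackbone(nn.Module):
    """Hypothetical stand-in for the real ViT-H/14 + CNN-branch model."""

    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=7, stride=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.head = nn.Linear(8, num_classes)  # one logit per manipulation type

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))


model = TinyBackbone().eval()
image = torch.rand(1, 3, 224, 224)  # 224x224 RGB input, as the card specifies

with torch.no_grad():
    logits = model(image)                    # shape: (1, 21)
    probs = torch.sigmoid(logits)            # independent per-class probabilities
    predicted = (probs > 0.5).nonzero()      # all classes above the 0.5 threshold
```

Unlike softmax classification, the sigmoid head lets several manipulation types fire at once, which is what "multi-label" means here: a single face can be simultaneously eye-modified and skin-smoothed.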

## 🛠️ Model Details
- **Architecture**: ViT-H/14 with an additional CNN branch and max-pooling for multi-label support.
- **Input Size**: 224x224 RGB images.
- **Number of Classes**: 21 facial attribute / manipulation types.
- **Training Objective**: Joint loss combining BCE, Focal, Dice, and Jaccard losses.
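The card names the four terms of the training objective but not their exact formulation or weights, so the following is a minimal sketch using common textbook definitions of each loss, equally weighted by default — an assumption, not the paper's implementation:

```python
import torch
import torch.nn.functional as F


def joint_multilabel_loss(logits, targets, gamma=2.0, eps=1e-6,
                          weights=(1.0, 1.0, 1.0, 1.0)):
    """Sketch of a BCE + Focal + Dice + Jaccard joint objective.

    `gamma` and `weights` are illustrative defaults; the paper's
    actual hyperparameters are not given in this model card.
    """
    p = torch.sigmoid(logits)

    # Binary cross-entropy over independent per-class labels.
    bce = F.binary_cross_entropy_with_logits(logits, targets)

    # Focal loss: down-weight easy examples by (1 - p_t)^gamma.
    p_t = p * targets + (1 - p) * (1 - targets)
    focal = ((1 - p_t) ** gamma *
             F.binary_cross_entropy_with_logits(
                 logits, targets, reduction="none")).mean()

    # Dice and Jaccard: soft set-overlap losses over the batch.
    inter = (p * targets).sum()
    dice = 1 - (2 * inter + eps) / (p.sum() + targets.sum() + eps)
    union = p.sum() + targets.sum() - inter
    jaccard = 1 - (inter + eps) / (union + eps)

    w = weights
    return w[0] * bce + w[1] * focal + w[2] * jaccard + w[3] * dice
```

Pairing BCE/Focal (per-label calibration, robustness to easy negatives) with Dice/Jaccard (set-overlap quality) is a common recipe when positive labels are sparse, as they are with rare manipulation types.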

## 🚀 Links
- **Official Code**: [Generating-Attribution-Reports](https://github.com/NattyLianJc/Generating-Attribution-Reports)
- **Main Framework (DFF)**: [LianJC/DFF-InstructBLIP-Detection](https://huggingface.co/LianJC/DFF-InstructBLIP-Detection)
- **Dataset (MMTT)**: [LianJC/MMTT-Dataset](https://huggingface.co/datasets/LianJC/MMTT-Dataset)

## 📜 Citation
If you find this model useful, please cite:
```bibtex
@inproceedings{lian2026generating,
  title={Generating Attribution Reports for Manipulated Facial Images: A Dataset and Baseline},
  author={Lian, Jingchun and others},
  booktitle={Proceedings of ACL},
  year={2026},
  note={To appear}
}
```