Upload app.py with huggingface_hub
app.py CHANGED
@@ -108,21 +108,20 @@ with gr.Blocks(css="""
     gr.Markdown("""
 # DFBench: The Image Deepfake Detection Benchmark 2025
 
-DFBench provides a standardized evaluation
+DFBench provides a standardized evaluation for computer vision deepfake detection systems.
 This leaderboard focuses on image deepfake detection, e.g. the output of text-to-image and image-to-image models.
 
 **Objectives:**
-- Allow fair comparison between deepfake detection models on unseen test data
+- Allow fair comparison between deepfake detection models on unseen test data (no fine tuning on the test data possible)
 - Advance the state-of-the-art in synthetic media identification
 
-This benchmark serves the academic and industry research community by providing consistent evaluation standards for deepfake detection methodologies.
 """)
 
     with gr.Tab("Leaderboard"):
         gr.Markdown("## Leaderboard Image Deepfake Detection")
         gr.HTML(leaderboard_view())
         gr.Markdown("""
-*The Leaderboard is updated upon validation of new submissions. All results are evaluated
+*The Leaderboard is updated upon validation of new submissions. All results are evaluated on the official [test dataset](https://huggingface.co/datasets/DFBench/Image-Deepfake-Detection-25).*
 """)
 
     with gr.Tab("Submission Guidelines"):
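Note that the Leaderboard tab renders `gr.HTML(leaderboard_view())`, so the table arrives as a prebuilt HTML string. The helper itself is outside this hunk; purely as an illustration, a minimal sketch of such a function, where the data layout, model names, and scores are all assumed rather than taken from the Space's actual code:

```python
# Hypothetical stand-in for the app's leaderboard_view(); the real data
# source and columns are not shown in this diff.
def leaderboard_view() -> str:
    # Placeholder entries: (model name, accuracy on the unseen test set).
    scores = [("Model This name", 0.91), ("Example Baseline", 0.84)]
    rows = "".join(
        f"<tr><td>{rank}</td><td>{name}</td><td>{acc:.3f}</td></tr>"
        for rank, (name, acc) in enumerate(
            sorted(scores, key=lambda s: s[1], reverse=True), start=1
        )
    )
    return (
        "<table><thead><tr><th>Rank</th><th>Model</th><th>Accuracy</th></tr></thead>"
        f"<tbody>{rows}</tbody></table>"
    )
```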
@@ -134,7 +133,7 @@ This benchmark serves the academic and industry research community by providing
 The test dataset comprises **2,920 images**. The test data is unlabeled. Each image is either:
 - **Real:** An authentic, unmodified image
 - **Fake:** AI-generated or synthetically modified content
-
+Since there are no labels, you cannot (and should not) train your model on the test data.
 ---
 
 ## Submission Requirements
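The new leaderboard note links the official test dataset on the Hub. If that repo is published as a standard `datasets` dataset, pulling it locally could look like the sketch below; only the dataset ID comes from the diff, while the split name is an assumption:

```python
from datasets import load_dataset

# Dataset ID taken from the leaderboard link above; the "test" split name
# is a guess, not confirmed by this diff.
ds = load_dataset("DFBench/Image-Deepfake-Detection-25", split="test")
print(len(ds))  # the guidelines say 2,920 unlabeled images
```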
@@ -157,15 +156,13 @@ filename,label
 ### Submission Process
 
 1. Generate predictions for all 2,920 test images
-2. Format results according to specification above
+2. Format results according to specification above
 3. Send your CSV file submission to: **submission@df-bench.com**. The name of the file should correspond to the leaderboard model name, e.g. `Model_This_name.csv` will be included as `Model This name` in the leaderboard.
 
 ### Evaluation Timeline
 - Submissions are processed within 5-7 business days
 - Approved submissions are added to the public leaderboard
 
----
-
 ## Notes
 - Each research group may submit one set of scores per month
 - All submissions undergo automated validation before leaderboard inclusion
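The hunk header above carries the `filename,label` context line, which fixes the CSV header for submissions. A minimal sketch of writing a compliant file follows, assuming one test image per row and string labels matching the class names `Real`/`Fake` (the label vocabulary is not spelled out in this diff); `predict` is a placeholder for your own detector:

```python
import csv
from pathlib import Path

def predict(image_path: Path) -> str:
    """Placeholder detector; swap in your model. Assumed labels: "Real"/"Fake"."""
    return "Fake"

def write_submission(image_dir: str, out_name: str = "Model_This_name.csv") -> None:
    # One row per test image (2,920 expected); header matches the diff context.
    with open(out_name, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["filename", "label"])
        for path in sorted(Path(image_dir).iterdir()):
            writer.writerow([path.name, predict(path)])

if __name__ == "__main__":
    write_submission("test_images")  # hypothetical local folder of test images
```

Per the guidelines, the file name doubles as the leaderboard entry, so `Model_This_name.csv` would appear as `Model This name`.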