# ViT for NSFW classification
## Model info
This is Google's `vit-base-patch16-224-in21k` fine-tuned to flag images according to the vndb.org scheme, with 3 classes:
- safe
- suggestive
- explicit
## Training data
The model was trained on the vndb.org database dump, using full-size screenshots (`sf` in the database dump). The dataset can be loaded from `carbon225/vndb_img`.
## Intended use
The model can be used to flag anime-style images for sexual content. It can also be fine-tuned for other tasks involving anime images.
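A minimal sketch of using the model for flagging. The label order and the decision helper are assumptions for illustration, and `"<this-model-id>"` is a placeholder for this repository's checkpoint ID (not confirmed by the card):

```python
# Assumed label set, taken from the three classes listed above.
LABELS = ["safe", "suggestive", "explicit"]


def flag(scores, threshold=0.5):
    """Return the highest-scoring label, or None if no class is confident.

    `scores` is a list of per-class probabilities in LABELS order.
    """
    best = max(range(len(scores)), key=lambda i: scores[i])
    return LABELS[best] if scores[best] >= threshold else None


if __name__ == "__main__":
    # Requires: pip install transformers torch pillow
    from transformers import pipeline

    # Placeholder ID -- substitute the actual checkpoint name of this model.
    classifier = pipeline("image-classification", model="<this-model-id>")
    result = classifier("screenshot.png")  # list of {"label": ..., "score": ...}
    print(result)
```

The `flag` helper is one possible way to turn the per-class scores into a single decision; adjust the threshold to trade precision against recall for your moderation policy.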