Provenance, Watermarking & Deepfake Detection
Technical tools for more control over non-consensual synthetic content
- Example of experimental audio watermarking research.
- A Watermark for LLMs: imperceptibly marks content generated by an LLM as synthetic.
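The core idea behind published LLM watermarking schemes of this kind (e.g. green-list/red-list watermarks) can be sketched as follows. This is a toy illustration, not the Space's actual implementation: the vocabulary size, hashing scheme, and detection threshold here are all invented for the example.

```python
import hashlib
import math

VOCAB_SIZE = 1000       # toy vocabulary; a real tokenizer has tens of thousands of tokens
GREEN_FRACTION = 0.5    # fraction of tokens placed on the "green list" at each step

def green_list(prev_token: int) -> set:
    """Pseudorandomly split the vocabulary, seeded by the previous token."""
    greens = set()
    for tok in range(VOCAB_SIZE):
        digest = hashlib.sha256(f"{prev_token}:{tok}".encode()).digest()
        if digest[0] < 256 * GREEN_FRACTION:
            greens.add(tok)
    return greens

def toy_generate(length: int, seed_token: int = 0) -> list:
    """Stand-in for a watermarked LLM: always emit a green token.
    (A real scheme only adds a soft logit bias toward the green list.)"""
    tokens = [seed_token]
    for _ in range(length):
        tokens.append(min(green_list(tokens[-1])))
    return tokens

def detect(tokens) -> float:
    """z-score of the observed green-token count vs. the rate expected by chance."""
    n = len(tokens) - 1
    hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
    expected = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std
```

Detection needs no access to the model: given the hashing key, anyone can recompute the green lists and test whether green tokens are statistically over-represented.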
- Fawkes: image "poisoning" that disrupts the training of facial recognition models.
- Image Watermarking for Stable Diffusion XL: from Imatag, robustly mark images as your own.
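A classical way to mark an image invisibly and verify ownership later is additive spread-spectrum watermarking. The sketch below illustrates that general idea only; it is not Imatag's method, and the pattern size and strength are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
PATTERN = rng.standard_normal((64, 64))   # the secret key: a pseudorandom pattern

def embed(image, strength=2.0):
    """Add a faint copy of the key pattern to the image."""
    return image + strength * PATTERN

def detect(image):
    """Normalized correlation with the key; near 0 for unmarked images."""
    return float((image * PATTERN).sum()
                 / (np.linalg.norm(image) * np.linalg.norm(PATTERN) + 1e-12))
```

Only the key holder can run the correlation test, and because the pattern is spread over every pixel, mild edits (compression, small crops) tend not to destroy the signal.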
- Watermarked Content Credentials: from Truepic, sign your images with C2PA Content Credentials.
- GenAI with Content Credentials: from Truepic, watermark your images with pointers to the original C2PA Content Credentials.
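The provenance idea behind Content Credentials is to cryptographically bind claims about an image's origin to the image bytes themselves. The toy manifest below shows that binding with an HMAC as a stand-in signature; real C2PA manifests use COSE/X.509 signatures in a JUMBF container, and the field names here are invented.

```python
import hashlib
import hmac
import json

def make_manifest(image_bytes: bytes, key: bytes, claims: dict) -> dict:
    """Toy provenance manifest: bind claims to the image via its hash,
    then 'sign' the payload (real C2PA uses COSE/X.509, not HMAC)."""
    payload = {"image_sha256": hashlib.sha256(image_bytes).hexdigest(),
               "claims": claims}
    body = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload,
            "signature": hmac.new(key, body, hashlib.sha256).hexdigest()}

def verify_manifest(image_bytes: bytes, key: bytes, manifest: dict) -> bool:
    payload = manifest["payload"]
    if payload["image_sha256"] != hashlib.sha256(image_bytes).hexdigest():
        return False  # the image was modified after signing
    body = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(manifest["signature"], expected)
```

Any edit to the image or to the claims breaks verification, which is exactly the tamper-evidence property provenance standards aim for.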
- Photoguard: from the Photoguard authors, image "guarding" that makes an image immune to direct editing by generative models.
- Photoguard (community contribution): safeguards images against ML-based photo manipulation using the same image-"guarding" technique.
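Photoguard-style "guarding" works by adding a tiny adversarial perturbation that scrambles what a generative model's encoder sees. The sketch below shows that encoder-attack idea with a toy linear encoder and an analytic gradient; the real method attacks a diffusion model's image encoder with autograd, and every dimension and hyperparameter here is invented.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))   # stand-in for a generative model's image encoder

def encode(x):
    return W @ x

def guard(x, steps=100, eps=0.1, lr=0.01):
    """PGD-style encoder attack: find a small perturbation (||delta||_inf <= eps)
    that pushes the encoding of x far from its original value, so an editing
    model no longer 'sees' the true image content."""
    target = encode(x)
    delta = rng.uniform(-eps, eps, size=x.shape)   # random start inside the budget
    for _ in range(steps):
        # gradient of ||W(x + delta) - target||^2 with respect to delta
        grad = 2.0 * W.T @ (encode(x + delta) - target)
        delta = np.clip(delta + lr * np.sign(grad), -eps, eps)  # ascend, then project
    return x + delta
```

The perturbation stays imperceptibly small (bounded by `eps`) while the latent representation moves substantially, which is what frustrates direct generative editing.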
- Dendrokronos: community contribution of the University of Maryland's image watermark for diffusion models.
- Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models (paper, arXiv 2302.04222)
- Robust Image Watermarking using Stable Diffusion (paper, arXiv 2401.04247)
- Three Bricks to Consolidate Watermarks for Large Language Models (paper, arXiv 2308.00113)
- CNN Deepfake Image Detection