
Felix Friedrich

felfri

AI & ML interests

GenAI safety/fairness & model steerability

Organizations

LAION eV, Artificial Intelligence & Machine Learning Lab at TU Darmstadt, leditsplusplus, Evaluating Social Impacts of Generative AI, Aurora-M, Ontocord.AI, Social Post Explorers, anonymousy7W4

Posts 1

πŸš€ Excited to announce the release of our new research paper, "LLAVAGUARD: VLM-based Safeguards for Vision Dataset Curation and Safety Assessment"!
In this work, we introduce LLAVAGUARD, a family of cutting-edge Vision-Language Model (VLM) judges designed to enhance the safety and integrity of vision datasets and generative models. Our approach leverages flexible policies to assess safety across diverse settings. This context awareness enables robust data curation and model safeguarding alongside comprehensive safety assessments, setting a new standard for vision datasets and models. We provide three model sizes (7B, 13B, and 34B) as well as our data; see the links below.

This achievement wouldn't have been possible without the incredible teamwork and dedication of my great colleagues @LukasHug , @PSaiml , @mbrack . πŸ™ Together, we've pushed the boundaries of what’s possible at the intersection of large generative models and safety.
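To make the flexible-policy idea concrete, here is a minimal, hypothetical sketch of how a safety policy could be rendered into a prompt for a VLM judge at inference time. The category names, prompt wording, and response format below are illustrative assumptions, not the actual LlavaGuard taxonomy or API.

```python
# Illustrative sketch only -- not the official LlavaGuard interface.
# Because the policy is supplied at inference time, the same judge model can
# be reused with stricter or looser category definitions per deployment.

# Hypothetical safety categories; real deployments would define their own.
SAFETY_POLICY = {
    "O1": "Hate or discrimination",
    "O2": "Violence or harm",
    "O3": "Sexual content",
}

def build_judge_prompt(policy: dict) -> str:
    """Render a policy dict into a text prompt for a VLM safety judge."""
    lines = ["Assess the image against the following safety policy:"]
    for code, description in policy.items():
        lines.append(f"{code}: {description}")
    # Ask for a structured verdict so downstream curation can parse it.
    lines.append('Answer with a JSON object: {"rating": "safe"|"unsafe", "category": <code>}')
    return "\n".join(lines)

prompt = build_judge_prompt(SAFETY_POLICY)
print(prompt)
```

Swapping in a different `SAFETY_POLICY` dict changes the judgment criteria without retraining, which is the kind of context awareness the post describes.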
πŸ” Dive into our paper to explore:
Innovative methodologies for dataset curation and model safeguarding.
State-of-the-art safety assessments.
Practical implications for AI development and deployment.
Find more at AIML-TUDA/llavaguard-665b42e89803408ee8ec1086 and https://ml-research.github.io/human-centered-genai/projects/llavaguard/index.html