arxiv:2501.09284

SEAL: Entangled White-box Watermarks on Low-Rank Adaptation

Published on Jan 16 · Submitted by BootsofLagrangian on Jan 21 · #3 Paper of the day
Abstract

Recently, LoRA and its variants have become the de facto strategy for training and sharing task-specific versions of large pretrained models, thanks to their efficiency and simplicity. However, copyright protection for LoRA weights, especially through watermark-based techniques, remains underexplored. To address this gap, we propose SEAL (SEcure wAtermarking on LoRA weights), a universal white-box watermarking scheme for LoRA. SEAL embeds a secret, non-trainable matrix between the trainable LoRA weights, serving as a passport to claim ownership. SEAL then entangles the passport with the LoRA weights through training, without any extra loss term for the entanglement, and distributes the fine-tuned weights after hiding the passport. When applying SEAL, we observed no performance degradation across commonsense reasoning, textual/visual instruction tuning, and text-to-image synthesis tasks. We demonstrate that SEAL is robust against a variety of known attacks: removal, obfuscation, and ambiguity attacks.
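To make the passport mechanism concrete, here is a minimal sketch of the idea described above, written in standard PyTorch. The class name `SealLoRALinear`, the matrix shapes, and the SVD-based factorization used to hide the passport at export time are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of the passport idea from the abstract (illustrative only;
# names, shapes, and the export factorization are assumptions, not SEAL's code).
import torch
import torch.nn as nn

class SealLoRALinear(nn.Module):
    """LoRA layer with a secret, non-trainable passport matrix C between B and A."""
    def __init__(self, base: nn.Linear, rank: int = 8, scale: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # pretrained weights stay frozen
            p.requires_grad_(False)
        d_out, d_in = base.out_features, base.in_features
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)   # trainable
        self.B = nn.Parameter(torch.zeros(d_out, rank))         # trainable
        # Secret passport: fixed (non-trainable) during training. In practice it
        # would be derived from the owner's secret key rather than sampled here.
        self.register_buffer("C", torch.randn(rank, rank))
        self.scale = scale

    def forward(self, x):
        # y = W0 x + scale * B C A x: the passport sits on the gradient path of
        # B and A, so it becomes entangled with them without any extra loss term.
        return self.base(x) + self.scale * (x @ self.A.T @ self.C.T @ self.B.T)

    def export_for_distribution(self):
        # Hide the passport before release: factor C into C1 @ C2 and fold the
        # factors into B and A so the shipped pair looks like ordinary LoRA weights.
        # (An SVD-based split is just one illustrative choice of factorization.)
        U, S, Vh = torch.linalg.svd(self.C)
        C1 = U @ torch.diag(S.sqrt())
        C2 = torch.diag(S.sqrt()) @ Vh
        B_pub = (self.B @ C1).detach()   # d_out x rank
        A_pub = (C2 @ self.A).detach()   # rank x d_in
        return B_pub, A_pub
```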

Community

Paper author · Paper submitter

We introduce SEAL, a robust watermarking scheme designed to safeguard adaptation weights like LoRA. As is well known, fully fine-tuning large models (whether language or diffusion) is computationally expensive, which is why parameter-efficient finetuning (PEFT) methods have become so prevalent. In particular, LoRA weights are widely shared on open-source repositories such as Hugging Face and CivitAI, fueling a vibrant community of creators who deserve reliable copyright protection. To address this, SEAL provides a seamless watermarking mechanism that can integrate effortlessly with existing PEFT libraries, including offshoots like LyCORIS, ensuring minimal overhead for adopters. While our code release will be delayed for certain practical reasons, we plan to make it available soon. Thank you for your interest.
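As a follow-up to the sketch above, the hypothetical check below illustrates why the distributed weights can flow through existing LoRA/PEFT tooling unchanged: the exported `(B_pub, A_pub)` pair reproduces the watermarked layer's output exactly, while the passport itself never ships.

```python
# Hypothetical check using the SealLoRALinear sketch above: the exported
# (B_pub, A_pub) pair matches the watermarked layer's output, so the
# distributed weights behave like an ordinary LoRA adapter.
import torch
import torch.nn as nn

layer = SealLoRALinear(nn.Linear(64, 64), rank=4)
with torch.no_grad():
    layer.B.copy_(torch.randn_like(layer.B) * 0.1)   # pretend some training happened

x = torch.randn(2, 64)
y_train = layer(x)                                   # forward with hidden passport

B_pub, A_pub = layer.export_for_distribution()
y_dist = layer.base(x) + layer.scale * (x @ A_pub.T @ B_pub.T)
print(torch.allclose(y_train, y_dist, atol=1e-5))    # expected: True
```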


