gsarti posted an update Feb 16
πŸ” Today's pick in Interpretability & Analysis of LMs: Recovering the Pre-Fine-Tuning Weights of Generative Models by @eliahu , J. Kahana, Y. Hoshen

Using low-rank adapters (LoRA) is nowadays common practice for fine-tuning pre-trained generative models on specific tasks or aligning them with human preferences.

This work explores pre-fine-tuning weight recovery: given a set of LoRA models with merged weights, all fine-tuned from the same pre-trained system, the task is to recover the original (unknown) weights of the pre-trained model.

The authors propose SpectralDeTuning, a method framing the task as an optimisation problem that alternates between two steps: approximating each low-rank tuned matrix via SVD, and computing the optimal pre-trained matrix in closed form given those low-rank approximations.
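To make the alternating scheme concrete, below is a minimal NumPy sketch of how such a procedure could look. This is not the authors' released implementation: the function name `spectral_detuning`, the initialisation, the iteration count, and the toy demo are all illustrative assumptions, and the sketch assumes a simple sum-of-squared-Frobenius-errors objective, under which the closed-form step reduces to averaging the residuals.

```python
import numpy as np

def spectral_detuning(finetuned_weights, rank, n_iters=300):
    """Illustrative sketch: estimate a shared pre-trained matrix from several
    LoRA fine-tuned copies, each assumed to be W_pre + B_i @ A_i of rank `rank`."""
    # Initialise the estimate with the mean of the fine-tuned matrices (assumption).
    W_est = np.mean(finetuned_weights, axis=0)
    for _ in range(n_iters):
        low_rank = []
        for W_i in finetuned_weights:
            # Step 1: best rank-r approximation of each residual via SVD.
            U, S, Vt = np.linalg.svd(W_i - W_est, full_matrices=False)
            low_rank.append(U[:, :rank] @ np.diag(S[:rank]) @ Vt[:rank, :])
        # Step 2: closed-form update of the pre-trained estimate; under a
        # squared-error objective it is the average of W_i minus its low-rank part.
        W_est = np.mean([W - M for W, M in zip(finetuned_weights, low_rank)], axis=0)
    return W_est

# Toy check on synthetic matrices (dimensions chosen arbitrarily for illustration).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, r, n_models = 64, 4, 8
    W_pre = rng.standard_normal((d, d))
    finetuned = [W_pre + rng.standard_normal((d, r)) @ rng.standard_normal((r, d))
                 for _ in range(n_models)]
    W_rec = spectral_detuning(finetuned, rank=r)
    print("relative recovery error:",
          np.linalg.norm(W_rec - W_pre) / np.linalg.norm(W_pre))
```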

The LoRA Weight Recovery Attack (LoWRA) benchmark is introduced to evaluate pre-fine-tuning weight recovery across language and vision tasks on ViT, Mistral and Stable Diffusion models.

The SpectralDeTuning method is shown to be effective in recovering original models both intrinsically (small difference in weights) and behaviourally (similar outputs). The main limitations of the approach are the assumption that the rank used by the LoRAs is known to the attacker, and the relatively high number of LoRAs needed to obtain a good approximation.

📄 Paper: Recovering the Pre-Fine-Tuning Weights of Generative Models (2402.10208)

💻 LoWRA Bench: Eliahu/LoWRA-Bench

πŸ” All daily picks in LM interpretability: gsarti/daily-picks-in-interpretability-and-analysis-of-lms-65ae3339949c5675d25de2f9