# simp_demo/configs/measuringforgetting.yaml
Abstract: "Machine learning models exhibit two seemingly contradictory phenomena: training data memorization, and various forms of forgetting. In memorization, models overfit specific training examples and become susceptible to privacy attacks. In forgetting, examples which appeared early in training are forgotten by the end. In this work, we connect these phenomena. We propose a technique to measure to what extent models \"forget\" the specifics of training examples, becoming less susceptible to privacy attacks on examples they have not seen recently. We show that, while non-convex models can memorize data forever in the worst-case, standard image, speech, and language models empirically do forget examples over time. We identify nondeterminism as a potential explanation, showing that deterministically trained models do not forget. Our results suggest that examples seen early when training with extremely large datasets - for instance those examples used to pre-train a model - may observe privacy benefits at the expense of examples seen later."
Applicable Models:
- ResNet (Image)
- Conformer (Audio)
- T5 (Text)
Authors: Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, Nicholas Carlini, Eric Wallace, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Chiyuan Zhang
Considerations: null
Datasets: null
Group: PrivacyEvals
Hashtags: null
Link: 'Measuring Forgetting of Memorized Training Examples'
Modality: Text + Image + Audio
Screenshots:
- Images/Forgetting1.png
- Images/Forgetting2.png
Suggested Evaluation: Measuring forgetting of training examples
Level: Model
URL: https://arxiv.org/pdf/2207.00099.pdf
What it is evaluating: Whether models forget training examples over time, across different model types (image, audio, text), and how the order in which examples are seen during training affects their susceptibility to privacy attacks
Metrics: null
Affiliations:
- Google
- University of Pennsylvania
- Cornell University
- University of California, Berkeley
- University of Toronto
Methodology: null
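# Illustration (a hypothetical sketch, not the authors' released code): one way
# to measure forgetting as the abstract describes is to run a loss-threshold
# membership-inference attack on "canary" examples seen early in training and
# re-evaluate the attack as training continues on fresh data. All names below
# (mi_advantage, canaries, holdout) are assumptions for illustration only.
#
#   import torch
#   import torch.nn.functional as F
#
#   @torch.no_grad()
#   def mi_advantage(model, canaries, holdout):
#       """Loss-threshold membership inference: the fraction of
#       (canary, holdout) pairs where the trained-on canary example
#       has strictly lower loss than the never-seen holdout example."""
#       model.eval()
#       def losses(batch):
#           inputs, labels = batch
#           return F.cross_entropy(model(inputs), labels, reduction="none")
#       canary_loss, holdout_loss = losses(canaries), losses(holdout)
#       # Pairwise comparison in [0, 1]; a value near 0.5 means the attack
#       # can no longer distinguish members from non-members.
#       return (canary_loss[:, None] < holdout_loss[None, :]).float().mean().item()
#
# Evaluated periodically during training, a decay of this score toward 0.5 for
# early-seen canaries is the forgetting effect this evaluation targets.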