<!--
@license
Copyright 2020 Google. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="apple-touch-icon" sizes="180x180" href="https://pair.withgoogle.com/images/favicon/apple-touch-icon.png">
<link rel="icon" type="image/png" sizes="32x32" href="https://pair.withgoogle.com/images/favicon/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="https://pair.withgoogle.com/images/favicon/favicon-16x16.png">
<link rel="mask-icon" href="https://pair.withgoogle.com/images/favicon/safari-pinned-tab.svg" color="#00695c">
<link rel="shortcut icon" href="https://pair.withgoogle.com/images/favicon.ico">
<script>
!(function(){
var url = window.location.href
if (url.split('#')[0].split('?')[0].slice(-1) != '/' && !url.includes('.html')) window.location = url + '/'
})()
</script>
<title>Why Some Models Leak Data</title>
<meta property="og:title" content="Why Some Models Leak Data">
<meta property="og:url" content="https://pair.withgoogle.com/explorables/data-leak/">
<meta name="og:description" content="Machine learning models use large amounts of data, some of which can be sensitive. If they're not trained correctly, sometimes that data is inadvertently revealed.">
<meta property="og:image" content="https://pair.withgoogle.com/explorables/images/model-inversion.png">
<meta name="twitter:card" content="summary_large_image">
<link rel="stylesheet" type="text/css" href="../style.css">
<link href='https://fonts.googleapis.com/css?family=Roboto+Slab:400,500,700|Roboto:700,500,300' rel='stylesheet' type='text/css'>
<link href="https://fonts.googleapis.com/css?family=Google+Sans:400,500,700" rel="stylesheet">
<meta name="viewport" content="width=device-width">
</head>
<body>
<div class='header'>
<div class='header-left'>
<a href='https://pair.withgoogle.com/'>
<img src='../images/pair-logo.svg' style='width: 100px'>
</a>
<a href='../'>Explorables</a>
</div>
</div>
<h1 class='headline'>Why Some Models Leak Data</h1>
<div class="post-summary">Machine learning models use large amounts of data, some of which can be sensitive. If they're not trained correctly, sometimes that data is inadvertently revealed.</div>
<link rel="stylesheet" href="style.css">
<p>Let’s take a look at a game of soccer. </p>
<link rel="stylesheet" href="style.css">
<div id='field-grass' class='field'></div>
<p><br></p>
<p>Using the position of each player as training data, we can teach a model to predict which team would get to a loose ball first at each spot on the field, indicated by the color of the pixel.</p>
<div id='field-prediction' class='field'></div>
<p>It updates in real time—drag the players around to see the model change.</p>
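<p>The demo’s actual model lives in <code>script.js</code>; as a rough sketch of how a per-pixel prediction like this could work, the snippet below colors each grid cell by the team of the nearest player, assuming everyone runs at the same speed. The <code>players</code> array is a hypothetical stand-in for the positions above.</p>
<pre><code>
// Rough sketch, not the demo's actual model: for each grid cell,
// predict which team reaches a loose ball there first. With equal
// running speeds, that is simply the team of the nearest player.
// `players` is a hypothetical [{x, y, team}] array.
function predictGrid(players, width, height, cellSize) {
  var cells = []
  for (var cx = 0; cx < width; cx += cellSize) {
    for (var cy = 0; cy < height; cy += cellSize) {
      var nearest = players.reduce(function(best, p) {
        var dist = Math.hypot(p.x - cx, p.y - cy)
        return dist < best.dist ? {dist: dist, team: p.team} : best
      }, {dist: Infinity, team: null})
      cells.push({x: cx, y: cy, team: nearest.team})
    }
  }
  return cells
}
</code></pre>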
<p><br></p>
<p>This model reveals quite a lot about the data used to train it. Even without the players’ actual positions, it is easy to guess where they might be. </p>
<div id='field-playerless' class='field'></div>
<p>Click this button to <span class="button" id="player-button">move the players</span> </p>
<p>Take a guess at where the yellow team’s goalie is now, then check their actual position. How close were you?</p>
<h3>Sensitive Salary Data</h3>
<p>In this specific soccer example, being able to make educated guesses about the data a model was trained on doesn’t matter too much. But what if our data points represent something more sensitive?</p>
<div id='field-scatter' class='field'></div>
<p>We’ve fed the same numbers into the model, but now they represent salary data instead of soccer data. Building models like this is a common technique to <a href="https://www.eeoc.gov/laws/guidance/section-10-compensation-discrimination#c.%20Using%20More%20Sophisticated%20Statistical%20Techniques%20to%20Evaluate">detect discrimination</a>. A union might test if a company is paying men and women fairly by building a salary model that takes into account years of experience. They can then <a href="https://postguild.org/2019-pay-study/">publish</a> the results to bring pressure for change or show improvement.</p>
<p>In this hypothetical salary study, even though no individual salaries have been published, it is easy to infer the salary of the newest male hire. And carefully cross-referencing public start dates on LinkedIn with the model could almost perfectly reveal everyone’s salary.</p>
<p>Because the model here is so flexible (there are hundreds of square patches with independently calculated predictions) and we have so few data points (just 22 people), it is able to “memorize” individual data points. If we’re looking to share information about patterns in salaries, a simpler and more constrained model like a linear regression might be more appropriate. </p>
<div id='field-regression' class='field'></div>
<p>By boiling down the 22 data points to two lines, we’re able to see broad trends without being able to guess anyone’s salary.</p>
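<p>As a sketch of what this more constrained model looks like in code, the snippet below fits one line per group with the simple-statistics library this page already loads. The sample points (years of experience, salary in thousands) are hypothetical.</p>
<pre><code>
// Sketch of the constrained model: one regression line per group,
// fit with the simple-statistics library loaded on this page.
// The hypothetical points are [yearsExperience, salaryInThousands].
var menPoints = [[1, 52], [4, 61], [7, 66], [12, 81]]
var womenPoints = [[2, 50], [5, 58], [9, 70], [13, 79]]

var menLine = ss.linearRegressionLine(ss.linearRegression(menPoints))
var womenLine = ss.linearRegressionLine(ss.linearRegression(womenPoints))

// Each group is now summarized by just a slope and an intercept:
// broad trends are visible, but no individual salary is recoverable.
console.log(menLine(10), womenLine(10))
</code></pre>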
<h3>Subtle Leaks</h3>
<p>Removing complexity isn’t a complete solution, though. Depending on how the data is distributed, even a simple line can inadvertently reveal information.</p>
<div id='field-regression-leak' class='field'></div>
<p>In this company, almost all the men started several years ago, so the slope of the line is especially sensitive to the salary of the new hire. </p>
<p>Is their salary <span class="button" id="high-button">higher or lower</span> than average? Based on the line, we can make a pretty good guess.</p>
<p>Notice that changing the salary of someone with a more common tenure barely moves the line. In general, more typical data points are less susceptible to being leaked. This sets up a tricky trade-off: we want models to learn about edge cases while being sure they haven’t memorized individual data points.</p>
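<p>To see that sensitivity concretely, we can refit the line while varying a single person’s salary and watch the slope move. A small sketch with hypothetical data, again using simple-statistics:</p>
<pre><code>
// Hypothetical data: almost everyone started years ago, one hire is new.
var points = [[12, 80], [11, 78], [10, 74], [9, 72], [0, 55]]

// Refit the regression with one salary changed and return the new slope.
function slopeWithSalary(index, salary) {
  var copy = points.map(function(p, i) {
    return i == index ? [p[0], salary] : p
  })
  return ss.linearRegression(copy).m
}

// Changing the new hire's salary (index 4) swings the slope noticeably...
console.log(slopeWithSalary(4, 45), slopeWithSalary(4, 65))
// ...while the same change to a typical-tenure salary barely moves it.
console.log(slopeWithSalary(3, 62), slopeWithSalary(3, 82))
</code></pre>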
<h3>Real World Data</h3>
<p>Models of real-world data are often quite complex—this can improve accuracy, but makes them <a href="https://blog.tensorflow.org/2020/06/introducing-new-privacy-testing-library.html">more susceptible</a> to unexpectedly leaking information. Medical models have inadvertently revealed <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4827719/">patients’ genetic markers</a>. Language models have memorized <a href="https://bair.berkeley.edu/blog/2019/08/13/memorization/">credit card numbers</a>. Faces can even be <a href="https://rist.tech.cornell.edu/papers/mi-ccs.pdf">reconstructed</a> from image models: </p>
<div class='face-container'><img src='face.png'></div>
<p><a href="https://rist.tech.cornell.edu/papers/mi-ccs.pdf">Fredrikson et al</a> were able to extract the image on the left by repeatedly querying a facial recognition API. It isn’t an exact match with the individual’s actual face (on the right), but this attack only required access to the model’s predictions, not its internal state. </p>
<h3>Protecting Private Data</h3>
<p>Training models with <a href="http://www.cleverhans.io/privacy/2018/04/29/privacy-and-machine-learning.html">differential privacy</a> stops the training data from leaking by limiting how much the model can learn from any one data point. Differentially private models are still at the cutting edge of research, but they’re being packaged into <a href="https://blog.tensorflow.org/2019/03/introducing-tensorflow-privacy-learning.html">machine learning frameworks</a>, making them much easier to use. When it isn’t possible to train differentially private models, there are also tools that can <a href="https://github.com/tensorflow/privacy/tree/master/tensorflow_privacy/privacy/membership_inference_attack">measure</a> how much data the model is memorizing. And standard techniques such as aggregation and limiting how much data a single source can contribute remain useful and usually improve a model’s privacy.</p>
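<p>The intuition behind those measurement tools (this is a sketch of the idea, not their actual API) is a membership inference test: if a model is much more confident on its training points than on held-out points, an attacker can often tell who was in the training data.</p>
<pre><code>
// Sketch of a basic membership inference test; `model.predictProba`
// is a hypothetical method returning class probabilities for an input.
function confidenceGap(model, trainPoints, heldOutPoints) {
  var confidence = function(p) { return model.predictProba(p.x)[p.label] }
  var trainMean = ss.mean(trainPoints.map(confidence))
  var heldOutMean = ss.mean(heldOutPoints.map(confidence))
  return trainMean - heldOutMean // a large gap suggests memorization
}
</code></pre>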
<p>As we saw in the <a href="https://pair.withgoogle.com/explorables/anonymization/">Collecting Sensitive Information Explorable</a>, adding enough random noise with differential privacy to protect outliers like the new hire can increase the amount of data required to reach a good level of accuracy. Depending on the application, the constraints of differential privacy could even improve the model—for instance, not learning too much from one data point can help prevent <a href="https://openreview.net/forum?id=r1xyx3R9tQ">overfitting</a>. </p>
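<p>A tiny sketch of that trade-off: publishing an average salary with Laplace noise calibrated so no single person, including the outlier, can change the output much. The bounds and parameter names here are illustrative.</p>
<pre><code>
// Sketch of the Laplace mechanism: a differentially private average.
// Salaries are clamped to an assumed [minVal, maxVal] range so one
// person can shift the true mean by at most (maxVal - minVal) / n.
function privateMeanSalary(salaries, minVal, maxVal, epsilon) {
  var clamped = salaries.map(function(s) {
    return Math.min(maxVal, Math.max(minVal, s))
  })
  var sensitivity = (maxVal - minVal) / salaries.length
  // Sample Laplace(0, sensitivity / epsilon) noise via the inverse CDF.
  var u = Math.random() - 0.5
  var noise = -(sensitivity / epsilon) * Math.sign(u) *
      Math.log(1 - 2 * Math.abs(u))
  return ss.mean(clamped) + noise
}
// Smaller epsilon gives stronger privacy but more noise; with only
// 22 people, reaching the same accuracy would take more data.
</code></pre>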
<p>Given how useful machine learning models are becoming for real-world tasks, more and more systems, devices, and apps will be powered, to some extent, by machine learning. While <a href="https://owasp.org/www-project-top-ten/">standard privacy best practices</a> developed for non-machine-learning systems still apply, machine learning brings new challenges of its own, including a model’s ability to memorize specific training data points and thus be vulnerable to privacy attacks that try to extract that data. Fortunately, techniques like differential privacy can help address this challenge. Just as with other areas of <a href="https://ai.google/responsibilities/responsible-ai-practices/">Responsible AI</a>, it’s important to be aware of the new challenges machine learning introduces and the steps that can be taken to mitigate them. </p>
<h3>Credits</h3>
<p>Adam Pearce and Ellen Jiang // December 2020</p>
<p>Thanks to Andreas Terzis, Ben Wedin, Carey Radebaugh, David Weinberger, Emily Reif, Fernanda Viégas, Hal Abelson, Kristen Olson, Martin Wattenberg, Michael Terry, Miguel Guevara, Thomas Steinke, Yannick Assogba, Zan Armstrong and our other colleagues at Google for their help with this piece.</p>
<h3>More Explorables</h3>
<p id='recirc'></p>
<script src='../third_party/d3_.js'></script>
<script src='../third_party/simple-statistics.min.js'></script>
<script src='players0.js'></script>
<script src='script.js'></script>
<script src='../third_party/recirc.js'></script>
</body>
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-138505774-1"></script>
<script>
if (window.location.origin === 'https://pair.withgoogle.com'){
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-138505774-1');
}
</script>
<script>
// Tweaks for displaying in an iframe
if (window !== window.parent){
// Open links in a new tab
Array.from(document.querySelectorAll('a'))
.forEach(e => {
// skip in-page anchor links (e.href resolves to a full URL,
// so check the raw attribute instead)
var href = e.getAttribute('href')
if (href && href[0] == '#') return
e.setAttribute('target', '_blank')
e.setAttribute('rel', 'noopener noreferrer')
})
// Remove recirc h3
Array.from(document.querySelectorAll('h3'))
.forEach(e => {
if (e.textContent != 'More Explorables') return
e.parentNode.removeChild(e)
})
// Remove recirc container
var recircEl = document.querySelector('#recirc')
recircEl.parentNode.removeChild(recircEl)
}
</script>
</html>