dhuynh95 committed on
Commit
d6fb6fb
1 Parent(s): b28677a
Files changed (1)
  1. app.py +10 -10
app.py CHANGED
@@ -20,11 +20,11 @@ df = df[["content"]].iloc[:50]
 title = "<h1 style='text-align: center; color: #333333; font-size: 40px;'> 🤔 StarCoder Memorization Checker"
 
 description = """
-This ability of LLMs to learn their training set by heart can pose huge privacy issues, as many large scale Conversational AI available commercially collect users data at scale and fine-tune their models on it.
-This means that if sensitive data is sent and memorized by an AI, other users' can willingly or unwillingly prompt the AI to spit out this sensitive data.
+This ability of LLMs to learn their training set by heart can pose huge privacy issues, as many large-scale Conversational AI available commercially collect users' data at scale and fine-tune their models on it.
+This means that if sensitive data is sent and memorized by an AI, other users can willingly or unwillingly prompt the AI to spit out this sensitive data.
 
-To raise awareness of this issue, we show in this demo how much [StarCoder](https://huggingface.co/bigcode/starcoder), an LLM specializd in coding tasks, has memorized its training set, [The Stack](https://huggingface.co/datasets/bigcode/the-stack-dedup).
-We have found that **StarCoder has memorized at least 8% of the training samples** we used, which highlights the high risks of LLMs exposing the training set. We provide the notebook to reproduce our results [here](https://colab.research.google.com/drive/1YaaPOXzodEAc4JXboa12gN5zdlzy5XaR?usp=sharing).
+To raise awareness of this issue, we show in this demo how much [StarCoder](https://huggingface.co/bigcode/starcoder), an LLM specialized in coding tasks, memorizes its training set, [The Stack](https://huggingface.co/datasets/bigcode/the-stack-dedup).
+We found that **StarCoder memorized at least 8% of the training samples** we used, which highlights the high risks of LLMs exposing the training set. We provide a notebook to reproduce our results [here](https://colab.research.google.com/drive/1YaaPOXzodEAc4JXboa12gN5zdlzy5XaR?usp=sharing).
 
 To evaluate memorization of the training set, we can prompt StarCoder with the first tokens of an example from the training set. If StarCoder completes the prompt with an output that looks very similar to the original sample, we will consider this sample to be memorized by the LLM.
 """
@@ -47,14 +47,14 @@ This means that an LLM performs verbatim memorization if parts of its training s
 
 ### Approximate memorization
 
-Therefore, a definition of approximate memozation was proposed in [Preventing Verbatim Memorization in Language
+Therefore, a definition of approximate memorization was proposed in [Preventing Verbatim Memorization in Language
 Models Gives a False Sense of Privacy](https://arxiv.org/abs/2210.17546):
 
-A training sentence is approximatively memorized if the [BLEU score](https://huggingface.co/spaces/evaluate-metric/bleu) of the completed sentence and the original training sentence is above a specific threshold.
+A training sentence is approximately memorized if the [BLEU score](https://huggingface.co/spaces/evaluate-metric/bleu) of the completed sentence and the original training sentence is above a specific threshold.
 
 **For this notebook, we will focus on approximate memorization, with a threshold set at 0.75.**
 
-The researchers found that the threshold of 0.75 provided good empriical results in terms of semantic and syntaxic similarity.
+The researchers found that the threshold of 0.75 provided good empirical results in terms of semantic and syntactic similarity.
 """
 
 high_bleu_examples = {
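To make the 0.75 threshold concrete, here is a short sketch of the approximate-memorization test using the `evaluate` library's BLEU metric (the same one linked above). It is an illustration, not the demo's actual scoring code, and `is_approximately_memorized` is a hypothetical helper name.

```python
# Approximate-memorization test: BLEU(completion, original) >= 0.75,
# following the definition quoted above. Illustrative sketch only.
import evaluate

bleu = evaluate.load("bleu")
THRESHOLD = 0.75  # threshold used in this notebook/demo

def is_approximately_memorized(completion: str, original: str) -> bool:
    # `references` is a list of reference sets, one per prediction.
    result = bleu.compute(predictions=[completion], references=[[original]])
    return result["bleu"] >= THRESHOLD
```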
@@ -267,15 +267,15 @@ with gr.Blocks() as demo:
                 fn=low_bleu_mirror, cache_examples=True)
         with gr.Column():
             label = gr.Label(value={"BLEU": 0},label="Memorization score (BLEU)")
-            gr.Markdown("""[BLEU](https://huggingface.co/spaces/evaluate-metric/bleu) score is a metric that can be used to measure similarity of two sentences.
-            Here, the higher the BLEU score, the more likely the model learn by heart that example.
+            gr.Markdown("""[BLEU](https://huggingface.co/spaces/evaluate-metric/bleu) score is a metric that can be used to measure the similarity of two sentences.
+            Here, the higher the BLEU score, the more likely the model will learn the example by heart.
             You can reduce the Prefix size in the Advanced parameters to reduce the context length and see if the model still extracts the training sample.""")
 
     with gr.Row():
         with gr.Column():
             gr.Markdown("""# More samples from The Stack.
             The examples shown above come from [The Stack](https://huggingface.co/datasets/bigcode/the-stack-dedup), an open-source dataset of code data.
-            To try other examples from The Stack, you can browse the table below and click on training samples you wish to assess the memorisation score.""")
+            To try other examples from The Stack, you can browse the table below and click on the training samples you wish to assess the memorization score.""")
             with gr.Accordion("More samples", open=False):
                 table = gr.DataFrame(value=df, row_count=5, label="Samples from The Stack", interactive=False)
     submit.click(
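The last hunk shows the table of extra samples from The Stack that users can click to load into the checker. The commit does not show the app's actual event wiring, so the snippet below is only a generic Gradio pattern for "click a table row to load a sample"; the names `pick_sample` and `sample_box` are hypothetical.

```python
# Generic Gradio pattern for selecting a DataFrame row (illustrative; the
# component names and wiring are not the ones from app.py).
import gradio as gr
import pandas as pd

df = pd.DataFrame({"content": ["def add(a, b):\n    return a + b"]})

def pick_sample(data: pd.DataFrame, evt: gr.SelectData) -> str:
    # evt.index is (row, column) for a DataFrame selection.
    return data.iloc[evt.index[0]]["content"]

with gr.Blocks() as demo:
    sample_box = gr.Textbox(label="Training sample to check")
    table = gr.DataFrame(value=df, interactive=False)
    table.select(pick_sample, inputs=table, outputs=sample_box)

demo.launch()
```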