dhuynh95 commited on
Commit
b28677a
1 Parent(s): 24a44fa

Update app.py

Files changed (1): app.py (+1, −1)
app.py CHANGED
@@ -24,7 +24,7 @@ This ability of LLMs to learn their training set by heart can pose huge privacy
  This means that if sensitive data is sent and memorized by an AI, other users can willingly or unwillingly prompt the AI to spit out this sensitive data.
 
  To raise awareness of this issue, we show in this demo how much [StarCoder](https://huggingface.co/bigcode/starcoder), an LLM specialized in coding tasks, has memorized its training set, [The Stack](https://huggingface.co/datasets/bigcode/the-stack-dedup).
- We have found that StarCoder has memorized at least 8% of the training samples we used, which highlights the high risks of LLMs exposing the training set. We provide the notebook to reproduce our results [here](https://colab.research.google.com/drive/1YaaPOXzodEAc4JXboa12gN5zdlzy5XaR?usp=sharing).
+ We have found that **StarCoder has memorized at least 8% of the training samples** we used, which highlights the high risks of LLMs exposing the training set. We provide the notebook to reproduce our results [here](https://colab.research.google.com/drive/1YaaPOXzodEAc4JXboa12gN5zdlzy5XaR?usp=sharing).
 
  To evaluate memorization of the training set, we can prompt StarCoder with the first tokens of an example from the training set. If StarCoder completes the prompt with an output that looks very similar to the original sample, we will consider this sample to be memorized by the LLM.
  """