borgr committed on
Commit
0c35958
β€’
1 Parent(s): f77074b

Update contamination_report.csv


## What are you reporting:
**Contaminated model(s)**: GPT-4

**Contaminated corpora**:
conll2003
nyu-mll/glue
rajpurkar/squad_v2
https://catalog.ldc.upenn.edu/LDC2006T06
quac
natural_questions
google/boolq
**Contaminated split(s)**: The exact contamination percentage is unclear; we only know that the model regurgitates training, validation, and test data and/or the metadata of each split.

> You may also report instances where there is no contamination. In such cases, follow the previous instructions but report a contamination level of 0%.

## Briefly describe your method to detect data contamination

- [x] Model-based approach

Description of your method, 3-4 sentences. Evidence of data contamination (Read below):
Prompt GPT-4 and observe that it reproduces metadata as well as training, validation, and test examples on its own.
See more here:
https://hitz-zentroa.github.io/lm-contamination/blog/
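The probe described above can be sketched as a guided prompt, loosely following the style of the linked blog post. The prompt template and the dataset/split names below are illustrative assumptions, not the exact prompts used in this report:

```python
# Sketch of a "guided instruction" contamination probe. The template wording
# is an assumption for illustration, not the exact prompt from the blog post.

def build_probe_prompt(dataset: str, split: str) -> str:
    """Ask the model to reproduce evaluation data it could only know via contamination."""
    return (
        f"Please generate the first instances of the {split} split of the "
        f"{dataset} dataset, exactly as they appear in the original data."
    )

prompt = build_probe_prompt("conll2003", "test")
```

If the model completes such a prompt with verbatim instances (or split metadata), that is evidence of memorization.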

#### Data-based approaches
Data-based approaches identify evidence of data contamination in a pre-training corpus by directly examining the dataset for instances of the evaluation data. This method involves algorithmically searching through a large pre-training dataset to find occurrences of the evaluation data. You should provide evidence of data contamination in the form: "dataset X appears in line N of corpus Y," "dataset X appears N times in corpus Y," or "N examples from dataset X appear in corpus Y."
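A minimal sketch of such a search follows; the function and variable names are my own, and real pipelines use suffix arrays or n-gram indexes to scale to large corpora:

```python
# Minimal sketch of a data-based contamination search: scan corpus lines for
# verbatim occurrences of evaluation examples. Names here are illustrative.

def find_contamination(corpus_lines, eval_examples):
    """Map each evaluation example to the corpus line numbers where it appears."""
    hits = {}
    for lineno, line in enumerate(corpus_lines, start=1):
        for example in eval_examples:
            if example in line:
                hits.setdefault(example, []).append(lineno)
    return hits

corpus = [
    "some unrelated pre-training text",
    "What is the capital of France? Paris.",
    "more filler text",
]
evidence = find_contamination(corpus, ["What is the capital of France? Paris."])
```

The returned mapping supports evidence statements of the requested form, e.g. "dataset X appears in line N of corpus Y."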

#### Model-based approaches

Model-based approaches, on the other hand, utilize heuristic algorithms to infer the presence of data contamination in a pre-trained model. These methods do not directly analyze the data but instead assess the model's behavior to predict data contamination. Examples include prompting the model to reproduce elements of an evaluation dataset to demonstrate memorization (e.g., https://hitz-zentroa.github.io/lm-contamination/blog/) or using perplexity measures to estimate data contamination. You should provide evidence of data contamination in the form of evaluation results of the algorithm from research papers, screenshots of model outputs that demonstrate memorization of a pre-training dataset, or any other form of evaluation that substantiates the method's effectiveness in detecting data contamination. You can provide a confidence score in your predictions.
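As one hedged illustration of how memorization evidence can be quantified, the sketch below scores what fraction of a reference continuation the model reproduces verbatim. This is a simple token-prefix heuristic of my own, not any specific published metric:

```python
# Illustrative memorization score: fraction of reference tokens the model
# output reproduces verbatim, in order, from the start. A simple heuristic,
# not a published metric.

def memorization_score(model_output: str, reference: str) -> float:
    """Return the matched-prefix fraction of the reference, in [0, 1]."""
    out_tokens, ref_tokens = model_output.split(), reference.split()
    matched = 0
    for a, b in zip(out_tokens, ref_tokens):
        if a != b:
            break
        matched += 1
    return matched / max(len(ref_tokens), 1)

# Hypothetical example: the model drops one token mid-sentence.
score = memorization_score(
    "EU rejects German call to boycott lamb .",
    "EU rejects German call to boycott British lamb .",
)
```

A score near 1.0 on many held-out test instances would be strong behavioral evidence of contamination; scores near 0.0 suggest none.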

## Citation

Is there a paper that reports the data contamination or describes the method used to detect data contamination?
Blog post, not a paper, so we can create a bib entry if we want.
URL: `https://hitz-zentroa.github.io/lm-contamination/blog/` (see also https://aclanthology.org/2023.findings-emnlp.722/)
Citation: `@inproceedings{...`


*Important!* If you wish to be listed as an author in the final report, please complete this information for all the authors of this Pull Request.
- Full name: Leshem Choshen
- Institution: MIT-IBM Watson AI Lab, MIT
- Email: leshem.choshen@mail.huji.ac.il

Files changed (1)
  1. contamination_report.csv +7 -0

contamination_report.csv CHANGED
@@ -1,5 +1,12 @@
 Evaluation Dataset;Subset;Contaminated Source;Model or corpus;Train Split;Development Split;Test Split;Approach;Reference;PR
 
+conll2003;;GPT-4;Model;;;;model-based;https://hitz-zentroa.github.io/lm-contamination/blog/;4
+nyu-mll/glue;;GPT-4;Model;;;;model-based;https://hitz-zentroa.github.io/lm-contamination/blog/;4
+rajpurkar/squad_v2;;GPT-4;Model;;;;model-based;https://hitz-zentroa.github.io/lm-contamination/blog/;4
+https://catalog.ldc.upenn.edu/LDC2006T06;;GPT-4;Model;;;;model-based;https://hitz-zentroa.github.io/lm-contamination/blog/;4
+quac;;GPT-4;Model;;;;model-based;https://hitz-zentroa.github.io/lm-contamination/blog/;4
+natural_questions;;GPT-4;Model;;;;model-based;https://hitz-zentroa.github.io/lm-contamination/blog/;4
+google/boolq;;GPT-4;Model;;;;model-based;https://hitz-zentroa.github.io/lm-contamination/blog/;4
 
 UCLNLP/adversarial_qa;adversarialQA;allenai/c4;corpus;;;0.03;data-based;https://arxiv.org/abs/2310.20707;2
 UCLNLP/adversarial_qa;adversarialQA;oscar-corpus/OSCAR-2301;corpus;;;0.03;data-based;https://arxiv.org/abs/2310.20707;2