---
license: mit
language:
- en
size_categories:
- 1K<n<10K
---
|
## Overview |
|
|
|
The **StackMIAsub** dataset serves as a benchmark for membership inference attacks (MIA). **StackMIAsub** is built from the [Stack Exchange](https://archive.org/details/stackexchange) corpus, which is widely used for pre-training. See our paper (to be released) for a detailed description.
|
|
|
## Data format |
|
|
|
**StackMIAsub** is formatted as a `jsonlines` file in the following manner:
|
|
|
```json |
|
{"snippet": "SNIPPET1", "label": 1 or 0} |
|
{"snippet": "SNIPPET2", "label": 1 or 0} |
|
... |
|
``` |
|
- 📌 *label 1* denotes members, while *label 0* denotes non-members.
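
The snippets can be loaded with a few lines of plain Python. A minimal sketch, assuming the file is saved locally as `StackMIAsub.jsonl` (the exact file name may differ):

```python
import json

# Read StackMIAsub from a local jsonlines file and split by label.
# NOTE: the file name "StackMIAsub.jsonl" is an assumption; adjust to your download path.
members, non_members = [], []
with open("StackMIAsub.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        if record["label"] == 1:
            members.append(record["snippet"])      # label 1: member snippets
        else:
            non_members.append(record["snippet"])  # label 0: non-member snippets

print(f"{len(members)} member snippets, {len(non_members)} non-member snippets")
```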
|
|
|
## Applicability |
|
|
|
Our dataset supports most white-box and black-box models that were <span style="color:red;">released before May 2024 and pre-trained on the Stack Exchange corpus</span> (a minimal scoring sketch follows the list):
|
|
|
- **Black-box OpenAI models:** |
|
- *text-davinci-001* |
|
- *text-davinci-002* |
|
- *...* |
|
- **White-box models:** |
|
- *LLaMA and LLaMA2* |
|
- *Pythia* |
|
- *GPT-Neo* |
|
- *GPT-J* |
|
- *OPT* |
|
- *StableLM* |
|
- *Falcon* |
|
- *...* |
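
For any of the white-box models above, a simple loss-based membership score can serve as a sanity check on the dataset. This is only an illustrative baseline sketch, not the PAC method from our paper; the model choice `EleutherAI/gpt-neo-125m` is just an example assumption:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative loss-based MIA baseline: lower per-token loss on a snippet
# is (weak) evidence that the snippet was seen during pre-training.
model_name = "EleutherAI/gpt-neo-125m"  # example white-box model; swap in any listed model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def snippet_loss(snippet: str) -> float:
    """Average per-token cross-entropy loss of the model on the snippet."""
    inputs = tokenizer(snippet, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    return outputs.loss.item()

# A threshold on this score yields a crude member/non-member guess.
score = snippet_loss("Example snippet text from StackMIAsub.")
print(f"per-token loss: {score:.3f}")
```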
|
|
|
## Related repo |
|
|
|
To run our PAC method for membership inference attacks, visit our [code repo](https://github.com/yyy01/PAC).
|
|
|
## Cite our work |
|
⭐️ If you find our dataset helpful, please cite our work:
|
|
|
```bibtex |
|
@misc{ye2024data, |
|
title={Data Contamination Calibration for Black-box LLMs}, |
|
author={Wentao Ye and Jiaqi Hu and Liyao Li and Haobo Wang and Gang Chen and Junbo Zhao}, |
|
year={2024}, |
|
eprint={2405.11930}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.LG} |
|
} |
|
``` |