- split: val
  path: youtubesubtitles/val-*
---

# LLM Dataset Inference

This repository contains various subsets of the PILE dataset, divided into train and validation sets. The data is used to facilitate privacy research in language models, where perturbed data can be used as a reference to detect the presence of a particular dataset in the training data of a language model.

## Data Used

The data is in the form of JSONL files, with each entry containing the raw text, as well as various kinds of perturbations applied to it.
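
To see what the entries actually look like, you can inspect the schema and a sample record. A minimal sketch follows; the exact column layout (a raw-text field plus one column per perturbation) is an assumption, so check `column_names` on your copy:

```python
from datasets import load_dataset

# Load one subset (config); each row holds the raw text plus perturbed variants
ds = load_dataset("pratyushmaini/llm_dataset_inference", "wikipedia", split="train")

# Inspect the schema rather than assuming column names
print(ds.column_names)

# Peek at the first record, truncating long strings for readability
for key, value in ds[0].items():
    preview = value[:80] if isinstance(value, str) else value
    print(f"{key}: {preview!r}")
```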

## Quick Links

- [**arXiv Paper**](): Detailed information about the Dataset Inference V2 project, including the dataset, results, and additional resources.
- [**GitHub Repository**](): Access the source code, evaluation scripts, and additional resources for Dataset Inference.
- [**Dataset on Hugging Face**](https://huggingface.co/datasets/pratyushmaini/llm_dataset_inference): Direct link to download the various versions of the PILE dataset.
- [**Summary on Twitter**](): A concise summary and key takeaways from the project.

## Applicability 🚀

The dataset is in text format and can be loaded using the Hugging Face `datasets` library. It can be used to evaluate any causal or masked language model for the presence of specific datasets in its training pool. The dataset is *not* intended for direct use in training models, but rather for evaluating the privacy of language models. Please keep the validation sets and the perturbed train sets private, and do not use them for training models.
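
As a sketch of the kind of check this enables: compare a causal language model's loss on a train split (possibly seen during training) against the corresponding val split (unseen). The model choice, the `text` column name, and the sample size below are illustrative assumptions, not part of this repository; the actual procedure, which uses the perturbed references and statistical aggregation, is described in the paper and GitHub repository linked above.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/pythia-160m"  # illustrative; use the model under evaluation
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def mean_loss(texts):
    """Average per-example cross-entropy of the model over a list of strings."""
    losses = []
    with torch.no_grad():
        for text in texts:
            enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
            out = model(**enc, labels=enc["input_ids"])
            losses.append(out.loss.item())
    return sum(losses) / len(losses)

train = load_dataset("pratyushmaini/llm_dataset_inference", "wikipedia", split="train")
val = load_dataset("pratyushmaini/llm_dataset_inference", "wikipedia", split="val")

# A lower loss on train than on val is weak evidence that the subset was in the
# training pool; dataset inference aggregates such signals statistically.
print("train loss:", mean_loss(train["text"][:32]))  # assumes a `text` column
print("val loss:  ", mean_loss(val["text"][:32]))
```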

## Loading the Dataset

To load the dataset, use the following code:

```python
from datasets import load_dataset

# Pass the subset as the config name (second positional argument)
dataset = load_dataset("pratyushmaini/llm_dataset_inference", "wikipedia", split="train")
```

Note: When loading the dataset, you must specify a subset. If you don't, you'll encounter the following error:

```
ValueError: Config name is missing.
Please pick one among the available configs: ['arxiv', 'bookcorpus2', 'books3', 'cc', 'enron', 'europarl', 'freelaw', 'github', 'gutenberg', 'hackernews', 'math', 'nih', 'opensubtitles', 'openwebtext2', 'philpapers', 'stackexchange', 'ubuntu', 'uspto', 'wikipedia', 'youtubesubtitles']
Example of usage:
`load_dataset('llm_dataset_inference', 'arxiv')`
```

Correct usage example:

```python
ds = load_dataset("pratyushmaini/llm_dataset_inference", "arxiv")
```
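
If you want to iterate over every subset rather than hard-coding names, the `datasets` library can enumerate the configs for you:

```python
from datasets import get_dataset_config_names, load_dataset

# Enumerate all available subsets (configs) of the dataset
configs = get_dataset_config_names("pratyushmaini/llm_dataset_inference")
print(configs)  # ['arxiv', 'bookcorpus2', ..., 'youtubesubtitles']

# Load the train split of each subset in turn
for name in configs:
    ds = load_dataset("pratyushmaini/llm_dataset_inference", name, split="train")
    print(name, len(ds))
```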

## Available Perturbations

We use the NL-Augmenter library to apply the following perturbations to the data (a toy sketch of one appears after the list):

- `synonym_substitution`: Replacing words in the sentence with synonyms.
- `butter_fingers`: Randomly changing characters in the sentence.
- `random_deletion`: Randomly deleting words from the sentence.
- `change_char_case`: Randomly changing the case of characters in the sentence.
- `whitespace_perturbation`: Randomly adding or removing whitespace from the sentence.
- `underscore_trick`: Adding underscores to the sentence.
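
For intuition, here is a toy re-implementation of a `butter_fingers`-style perturbation. This is an illustration only, not NL-Augmenter's actual code (the library uses keyboard-adjacency tables and more careful sampling):

```python
import random
import string

def butter_fingers(text: str, prob: float = 0.05, seed: int = 0) -> str:
    """Toy sketch: replace each letter with a random lowercase letter
    with probability `prob`."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        if ch.isalpha() and rng.random() < prob:
            out.append(rng.choice(string.ascii_lowercase))
        else:
            out.append(ch)
    return "".join(out)

print(butter_fingers("The quick brown fox jumps over the lazy dog."))
```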

## Contact

Please email `pratyushmaini@cmu.edu` in case of any queries regarding the dataset.
|