Datasets:
Tasks: Text Generation
Sub-tasks: language-modeling
Languages: code
Multilinguality: monolingual
Size Categories: unknown
Tags:
License:
Duplicates #2
by debadeepta - opened
This dataset has a lot of duplicates. Example indices: [0, 234234, 473960, 565389]. Some elements are even repeated up to 190 times. It could use a deduplication step.
Quick example script to see the duplicates:

```python
from collections import defaultdict

from datasets import load_dataset


def main():
    dataset = load_dataset("codeparrot/github-jupyter-text-code-pairs")
    dataset = dataset["train"]

    # Map each markdown+code string to the list of indices where it occurs
    dups_inds = defaultdict(list)

    def count_orig_dups(examples, indices):
        for index, orig_index in enumerate(indices):
            concat_str = examples["markdown"][index] + " \n " + examples["code"][index]
            dups_inds[concat_str].append(orig_index)

    # num_proc=1 so every batch updates the same dups_inds dict
    _ = dataset.map(count_orig_dups,
                    with_indices=True,
                    batched=True,
                    batch_size=1000,
                    num_proc=1)

    # Occurrence counts for every example that appears more than once
    dups_freqs = []
    for v in dups_inds.values():
        if len(v) > 1:
            dups_freqs.append(len(v))
    print('done')


if __name__ == '__main__':
    main()
```
Hi, thank you for bringing this to our attention. This is indeed a raw dataset parsed from Jupyter notebooks, where some cells are repeated. We will do the necessary deduplication.
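Such a deduplication step can be sketched by hashing each markdown + code pair and keeping only the first occurrence. This is a minimal standard-library illustration, not the maintainers' actual script; `dedup_pairs` is a hypothetical helper (with the `datasets` library, the same predicate could be passed to `Dataset.filter` with `num_proc=1` so the `seen` set is shared):

```python
import hashlib


def dedup_pairs(markdown, code):
    """Return the indices of the first occurrence of each (markdown, code) pair.

    Hypothetical sketch of a deduplication pass: hash the concatenated
    pair (same " \n " separator as the counting script above) and skip
    any key already seen.
    """
    seen = set()
    keep = []
    for i, (md, c) in enumerate(zip(markdown, code)):
        key = hashlib.sha256((md + " \n " + c).encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            keep.append(i)
    return keep


print(dedup_pairs(["a", "a", "b"], ["x", "x", "y"]))  # → [0, 2]
```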
@debadeepta we updated the dataset and it is now deduplicated.
loubnabnl changed discussion status to closed