Obtaining Filtered Samples

#12
by ssingh22 - opened

Hi, first of all, great release!

I would love to know if there is an efficient way to query subsamples by other metadata.

For example, URLs that follow a certain regex, or crawls from a certain date.

Could you please elaborate on this, and possibly add an example?

Thanks!

Hi @ssingh22, in the metadata field of the samples returned by the dataloader, we have the following fields:

{"url": "https://...", "partition": "...", "language": "...", "source_domain": "...", "date_download": "2023-01-26T21:20:33Z", "digest": "..."}

So, for example, if you want to filter based on these fields to only include documents with https and .com URLs, and only from crawls done in January, you can start from the following snippet:

from datasets import load_dataset
from datetime import datetime as dt
import json
import re

ds = load_dataset(
    "togethercomputer/RedPajama-Data-V2", name="sample", streaming=True
)
# match https URLs on .com domains (note the escaped dot)
url_pattern = re.compile(r"https://\S+\.com/\S+")

filtered_instances = []

for instance in ds["train"]:
    metadata = json.loads(instance["meta"])

    url = metadata["url"]
    if url_pattern.search(url) is None:
        continue

    # keep only documents crawled in January; strip the trailing "Z" so
    # datetime.fromisoformat also accepts the timestamp on Python < 3.11
    date_download = metadata["date_download"].replace("Z", "+00:00")
    if dt.fromisoformat(date_download).month != 1:
        continue

    filtered_instances.append(instance)

Note that this is just an illustration, and if you want to process an entire snapshot you will need to adapt it for efficiency.
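
As one possible refinement (just a sketch along the same lines, not an official recipe), the same predicates can also be pushed into the streaming pipeline with IterableDataset.filter, so non-matching rows are dropped as they arrive instead of being collected by hand; the data itself is still downloaded, only the Python-side handling changes:

from datasets import load_dataset
from datetime import datetime as dt
import json
import re

url_pattern = re.compile(r"https://\S+\.com/\S+")

def keep(instance):
    # the "meta" field is a JSON-encoded string
    metadata = json.loads(instance["meta"])
    if url_pattern.search(metadata["url"]) is None:
        return False
    # strip the trailing "Z" so fromisoformat works on Python < 3.11
    date_download = metadata["date_download"].replace("Z", "+00:00")
    return dt.fromisoformat(date_download).month == 1

ds = load_dataset(
    "togethercomputer/RedPajama-Data-V2", name="sample", streaming=True
)
# lazily applied; rows are filtered as they stream in
filtered = ds["train"].filter(keep)

for instance in filtered:
    ...  # process matching documents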

@mauriceweber This looks great, but doesn't this mean I will have to download the row before I can filter?

What I meant was that perhaps we can index the metadata separately to get the indices I am interested in; this would make the processing faster and cheaper, which will be an issue at this scale.

The metadata is usually ~1% of the actual content, so I would rather iterate over the metadata twice.

Together org

Yes, you're right, you will have to download the rows before filtering. We could in principle have a separate index on the metadata (i.e., paths /metadata/.../... .json.gz mirroring the document files). I am adding this to our roadmap for now and we will work on it. However, even if you have a list of document ids, you would probably still have to download the full document files to extract the documents you want.
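
To make the two-pass idea concrete, here is a rough sketch; the metadata paths are purely hypothetical (no such index is published yet), and the only fields assumed are the ones from the dataloader's meta field above:

import gzip
import json
from datasets import load_dataset

# hypothetical metadata-only shards mirroring the document files
METADATA_FILES = ["metadata/2023-06/0000.json.gz"]

# pass 1: scan the small metadata files and remember which documents we want
wanted_digests = set()
for path in METADATA_FILES:
    with gzip.open(path, "rt") as f:
        for line in f:
            meta = json.loads(line)
            if meta["url"].startswith("https://"):
                wanted_digests.add(meta["digest"])

# pass 2: stream the full documents and keep only the selected digests;
# the document shards still have to be downloaded to extract the text
ds = load_dataset(
    "togethercomputer/RedPajama-Data-V2", name="sample", streaming=True
)
filtered = [
    instance
    for instance in ds["train"]
    if json.loads(instance["meta"])["digest"] in wanted_digests
]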

In the meantime, another option you can consider is to transfer the data to an S3 bucket and use S3 Select to filter the data before transferring it to your instance.
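
For reference, a minimal boto3 sketch of that approach; the bucket, key, and field layout are assumptions (they depend on how you stage the shards in your own bucket), and the field path in the SQL only works if the metadata is stored as a nested JSON object rather than an encoded string:

import boto3

s3 = boto3.client("s3")

# hypothetical bucket/key: a gzipped JSON-lines shard copied into your own bucket
response = s3.select_object_content(
    Bucket="my-redpajama-mirror",
    Key="documents/2023-06/0000.json.gz",
    ExpressionType="SQL",
    # filter on metadata fields server-side, before the data leaves S3
    Expression="SELECT * FROM S3Object s WHERE s.meta.url LIKE 'https://%'",
    InputSerialization={"JSON": {"Type": "LINES"}, "CompressionType": "GZIP"},
    OutputSerialization={"JSON": {}},
)

# the result comes back as an event stream; collect the record payloads
matched = b""
for event in response["Payload"]:
    if "Records" in event:
        matched += event["Records"]["Payload"]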

@mauriceweber Yes, but my target size is around 1B tokens, coming from 30T, so it would still be effective. Sure, I will start with this in the meantime, thanks for taking this into account!

ssingh22 changed discussion status to closed
