---
task_categories:
  - image-to-text
  - text-to-image
pretty_name: Data Filtering Networks, 200m, datacomp large
size_categories:
  - 100M<n<1B
---

# Data Filtering Networks, 200m

This dataset was released with the Data Filtering Networks paper. It consists of a subset of DataComp large.

These Parquet files contain that subset. The following script was used to filter the original Parquet files using the subset indices from apf1/datafilteringnetworks_2b.

```python
import os
from glob import glob
from multiprocessing import Pool
from os import path

import numpy as np
import pyarrow.parquet as pq

parquet_files = list(glob("../*.parquet"))
out_path = "../resampled/"
os.makedirs(out_path, exist_ok=True)
subset_file = "../indices/datacomp_large_dfn_200m_inds.npy"

# Each uid is a 128-bit hex string, stored in the sorted subset index as a
# structured pair of uint64s: (high word, low word).
uid_dtype = np.dtype("u8,u8")


def load_subset():
    # Memory-map the sorted subset index so each worker avoids loading it fully.
    return np.load(subset_file, mmap_mode="r")


def process_parquet(parquet_file):
    print("filtering", parquet_file)
    subset = load_subset()
    table = pq.read_table(parquet_file)
    mask = []
    for uid in table["uid"]:
        uid = str(uid)
        # Split the 128-bit uid into (high, low) uint64 words to match the index dtype.
        key = np.array([divmod(int(uid, 16), 2**64)], uid_dtype)[0]
        # Binary-search the sorted index; the row is kept iff its uid appears exactly once.
        a = np.searchsorted(subset, key, "left")
        b = np.searchsorted(subset, key, "right")
        count = b - a
        assert count in (0, 1)
        mask.append(count == 1)

    table = table.filter(mask)
    out_filename = path.join(out_path, path.basename(parquet_file))
    pq.write_table(table, out_filename)
    print("wrote", out_filename)


if __name__ == "__main__":
    with Pool(4) as pool:
        pool.map(process_parquet, parquet_files)
    print("done.")
```
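The uid matching above hinges on encoding each 128-bit hex uid as a pair of uint64 words and binary-searching a sorted structured array. A standalone sketch of that mechanism, using a tiny made-up index rather than the released one:

```python
import numpy as np

# Structured dtype matching the subset index: (high uint64, low uint64).
key_dtype = np.dtype("u8,u8")


def uid_to_key(uid: str):
    # divmod by 2**64 splits a 128-bit integer into its high and low 64-bit words.
    return np.array([divmod(int(uid, 16), 2**64)], key_dtype)[0]


# A toy sorted index containing three hypothetical uids.
index = np.array([(0, 5), (1, 2), (7, 9)], dtype=key_dtype)

# 32 hex digits = 128 bits; this uid encodes to (high=1, low=2).
key = uid_to_key("00000000000000010000000000000002")

# Membership test via binary search, exactly as in the filtering script.
a = np.searchsorted(index, key, "left")
b = np.searchsorted(index, key, "right")
print(bool(b - a))  # True: this uid is in the index
```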