Dataset is missing "features" and "column_names"

#2 opened by dinalt

This dataset is missing the "features" and "column_names" fields, which makes it difficult to use with map, e.g.:

tokenized_split = split.map(
    tokenize_function,
    remove_columns=split.column_names,
    ...
)

Because these fields are None, passing column_names to map as remove_columns removes nothing. The original columns therefore remain alongside the ones added by the tokenize function, and the resulting size mismatch trips an assert.
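For reference, here is a minimal, self-contained version of that pattern; the tokenizer choice and the tokenize_function body are illustrative, not from my original code:

from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative model choice

def tokenize_function(batch):
    # Adds input_ids, attention_mask, etc. to each batch.
    return tokenizer(batch["text"], truncation=True)

split = load_dataset("venketh/SlimPajama-62B", streaming=True)['train']

# split.column_names is None here, so nothing gets removed and the
# original columns survive alongside the tokenizer's outputs.
tokenized_split = split.map(
    tokenize_function,
    batched=True,
    remove_columns=split.column_names,
)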

The individual records do contain the column names, so in theory you can iterate over a few records, recording the names as you go, to build the list yourself -- but that means reworking your code. It is also sub-optimal because diagnosing the problem in the first place is time-consuming.

If you need the column names, I have a workaround below.

dataset = load_dataset("venketh/SlimPajama-62B", streaming=True)
print(dataset['train'].column_names)
print(dataset.features)
---
None
None

Contrast with this similar dataset:

dataset = load_dataset("DKYoon/SlimPajama-6B", streaming=True)
print(dataset['train'].column_names)
print(dataset['train'].features)
['text', 'meta', '__index_level_0__']
{'text': Value(dtype='string', id=None), 'meta': {'redpajama_set_name': Value(dtype='string', id=None)}, '__index_level_0__': Value(dtype='int64', id=None)}

You can 'kind-of' work around this issue like this:

column_names = []

# Collect the keys of the first streamed record, then stop.
for record in dataset['train']:
    for key in record.keys():
        column_names.append(key)
    break

print(column_names)
['text', 'meta']
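With that list, the earlier map call works as expected (assuming a tokenize_function like the one sketched above):

tokenized_split = dataset['train'].map(
    tokenize_function,
    batched=True,
    remove_columns=column_names,
)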

Unfortunately, it does not appear possible to set the column_names field on the split. I'm not sure why.
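One possible alternative, which I have not verified against this dataset: load_dataset accepts a features argument, so you may be able to declare the schema up front. The schema below is copied from the DKYoon/SlimPajama-6B output above, minus the index column, so treat it as an assumption:

from datasets import load_dataset, Features, Value

# Assumed schema, taken from the sibling dataset; unverified for this one.
features = Features({
    "text": Value("string"),
    "meta": {"redpajama_set_name": Value("string")},
})

dataset = load_dataset("venketh/SlimPajama-62B", streaming=True, features=features)
print(dataset['train'].column_names)  # expected: ['text', 'meta']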
