Error when loading book/book.jsonl using load_dataset

#22
by icycold - opened

Downloaded the single file book/book.jsonl through the shell script and checked that the sha256sum is correct, but I get the following error when loading book/book.jsonl with load_dataset:

File "/opt/conda/lib/python3.8/site-packages/datasets/table.py", line 2144, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
struct<short_book_title: string, publication_date: int64, url: string, title: string>
to
{'short_book_title': Value(dtype='string', id=None), 'publication_date': Value(dtype='int64', id=None), 'url': Value(dtype='string', id=None)}

Any ideas about how to solve it? Thanks

Together org

Hi @icycold, can you show me the code that you are using to download the dataset?

# Read each URL from urls.txt, download the file locally, and mirror it into HDFS.
# $hdfs_base is assumed to be set earlier in the script.
while read -r line; do
  dload_loc=${line#https://data.together.xyz/redpajama-data-1T/v1.0.0/}
  echo " ->downloading $dload_loc..."
  mkdir -p "$(dirname "$dload_loc")"
  wget "$line" -O "$dload_loc"
  echo " ->dir $dload_loc"
  file=$hdfs_base/$(dirname "$dload_loc")
  if hadoop fs -test -d "$file"; then echo "dir $file exists"; else hdfs dfs -mkdir -p "$file"; fi
  echo " ->put $dload_loc to $file"
  hdfs dfs -put -f "$dload_loc" "$file"
  hdfs_file=$hdfs_base/$dload_loc
  echo " ->hdfs file: $hdfs_file"
  if hadoop fs -test -f "$hdfs_file"; then rm -f "$dload_loc"; echo " ->file $dload_loc uploaded to hdfs successfully"; else echo " ->file $dload_loc upload to hdfs failed"; fi
done < urls.txt

======
That's the shell script downloaded from the repo.

Together org

Thanks, I don't see an issue here. I will try to reproduce your error; can you provide more details about how you load the dataset? Judging from the stack trace, it looks like the title field is missing, which leads to the casting error.

https://data.together.xyz/redpajama-data-1T/v1.0.0/book/book.jsonl -> downloaded this file using the download script
cat book_SHA256SUMS.txt
50dd6f8ec8a69ba4b062038b49310132db5897b3f4ae02679b8afdf427c44e80 book/book.jsonl -> the sha256sum from the repo

50dd6f8ec8a69ba4b062038b49310132db5897b3f4ae02679b8afdf427c44e80 book/book.jsonl -> the downloaded book.jsonl's sha256sum; it's exactly the same as the one in the repo
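
For reference, a minimal Python sketch to recompute the checksum (the path and expected hash are taken from the commands above):

import hashlib

# Recompute the sha256 of the downloaded file and compare it to the published value.
def sha256_of(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "50dd6f8ec8a69ba4b062038b49310132db5897b3f4ae02679b8afdf427c44e80"
print(sha256_of("book/book.jsonl") == expected)  # True if the download is intact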

I load the dataset with:

import argparse
import os
import resource
from contextlib import nullcontext
from functools import partial
from typing import Optional, Tuple

import torch
import torch.distributed as dist
import torch.nn as nn
from attn import SUPPORT_XFORMERS, replace_xformers
from data_utils import load_json, prepare_dataloader, save_json
from datasets import load_dataset

# args is parsed elsewhere via argparse; args.dataset points at the downloaded data
dataset = load_dataset(args.dataset)

Together org

Thanks for the details @icycold!

I think what's causing the issue here is that the books split contains metadata with an inconsistent schema. That is, some records contain metadata with {"short_book_title": "...", "publication_date": "...", "url": "..."}, while other records contain metadata with {"title": "..."}. This is due to how the books split was built (it is a mix of books3 and pg-19). Now, if you load the dataset using load_dataset(args.dataset), this inconsistency leads to the casting error you observed above.
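
If you want to confirm this on your copy, here is a minimal sketch that counts which metadata key sets occur in the file (it assumes each line is a JSON object with a top-level "meta" field; adjust the key if your records are shaped differently):

import json
from collections import Counter

# Count the distinct sets of metadata keys that occur in book.jsonl.
# Assumes each line is a JSON object with a "meta" field.
key_sets = Counter()
with open("book/book.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        key_sets[tuple(sorted(record.get("meta", {}).keys()))] += 1

for keys, count in key_sets.most_common():
    print(keys, count)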

To address this, you can do one of the following:

  1. Write your own dataset loading script (check out the instructions here). You can base it on our implementation here.
  2. If you don't need a Dataset instance, you can also just iterate over the lines in the jsonl file and write your own generator; see the sketch below (the msgspec library is also worth considering, since it can speed up the JSON parsing).
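
A minimal sketch of option 2, assuming each line of book.jsonl is a standalone JSON object with "text" and "meta" fields (adjust the field names if yours differ):

import json

def iter_books(path="book/book.jsonl"):
    """Yield one record per line, tolerating the inconsistent metadata schemas."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)
            # With msgspec installed, msgspec.json.decode(line) is a faster drop-in.

for record in iter_books():
    text = record.get("text", "")
    meta = record.get("meta", {})
    # ... process the record here ...
    break  # remove this; it only keeps the example short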
