Downloading the v1_6_sample data

#20
by ibenshaul - opened

Has anyone managed to load this data? I followed the instructions, but it seems that loading from HF still tries to download the entire dataset. I've also tried using the revision parameter / data_dir parameter without luck.

Ai2 org

Hi, I am not sure that I understand your problem. Could you clarify what you mean by the entire dataset, and what you are hoping to download instead? If you are trying to download just v1_6-sample but v1_6 is getting downloaded instead, then can you please share your repro steps?

Ai2 org

I attempted to repro the behavior seen here. Based on the instructions, I tried to download the data first with wget, and then with load_dataset. The load_dataset step always downloaded v1_6, no matter how I set the data dir. If I passed name="v1_6-sample" to load_dataset, then the correct data began to download.
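
For concreteness, a minimal sketch of the call that pulled the correct files in my test (the split argument here is illustrative, not part of the original repro):

from datasets import load_dataset

# name="v1_6-sample" selects the sample config; without it, the full v1_6 is downloaded.
dataset = load_dataset("allenai/dolma", name="v1_6-sample", split="train")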

@soldni Downloading with wget seems to have no discernible influence on the load_dataset step; they seem to be two different ways of downloading the dataset. Is that correct? If so, can you clarify the README to indicate that wget and load_dataset are two different ways of getting the data? If not, please clarify their relationship in the README.

Same here. The load_dataset step results in a connection error.

....
ConnectionError: Couldn't reach [URL omitted for safety]v1_5r2_sample-0000.json.gz (ConnectionError(ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))))
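
Not something from this thread, but if the resets are transient, one possible workaround is to retry the download; a sketch assuming a datasets version where DownloadConfig exposes max_retries, and assuming the v1_6-sample config:

from datasets import load_dataset, DownloadConfig

# Retry transient connection resets a few times before giving up.
download_config = DownloadConfig(max_retries=5)
dataset = load_dataset("allenai/dolma", name="v1_6-sample", split="train", download_config=download_config)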

Can someone provide a minimal example of how to load the data from a local copy (downloaded using wget) with Hugging Face? The code on the website and various trivial modifications (e.g. adding the version to load_dataset) do not work.

Here is what I ended up using (maybe too much info, but in case it helps someone):

I had some issues setting up my HF account to work with SSH and cloning; I created my SSH key, but I then had to add it to my SSH agent:

ssh-add ~/.ssh/huggingface

where huggingface is the name of my key. Then you should be able to run "ssh -T git@hf.co" and see the response "Hi {username}, welcome to Hugging Face".

Then I made a .sh file:

#!/bin/bash

DATA_DIR='/home/ubuntu/data'
PARALLEL_DOWNLOADS='10'
DOLMA_VERSION='v1_6-sample'

# Clone the repo to get the per-version lists of file URLs
git clone git@hf.co:datasets/allenai/dolma

mkdir -p "${DATA_DIR}"

# Download every URL in the list, PARALLEL_DOWNLOADS files at a time
cat "dolma/urls/${DOLMA_VERSION}.txt" | xargs -n 1 -P "${PARALLEL_DOWNLOADS}" wget -q -P "$DATA_DIR"

I ran it, and the files downloaded without error. To actually load the files, I ran:

from datasets import load_dataset

# Set the path to the directory where your JSON files are located
data_dir = '/home/ubuntu/data'
file_pattern = 'v1_5r2_sample-*.json.gz'  # Adjust the pattern to match your files

# Load the dataset from the specified JSON files (streaming avoids loading everything into RAM)
dataset = load_dataset('json', data_files=f'{data_dir}/{file_pattern}', split='train', streaming=True)

print(dataset)

Note: I had to use streaming to load it; otherwise it exhausted my RAM.
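
In case it helps: with streaming=True the result is an IterableDataset, so you read it by iterating rather than indexing. A self-contained sketch (the "text" field is what I saw in my copy of the files; adjust if yours differ):

from itertools import islice
from datasets import load_dataset

data_dir = '/home/ubuntu/data'
dataset = load_dataset(
    'json',
    data_files=f'{data_dir}/v1_5r2_sample-*.json.gz',
    split='train',
    streaming=True,
)

# Peek at the first 5 records without loading everything into RAM.
for record in islice(dataset, 5):
    print(record.get('text', '')[:200])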

Thanks @aaronmac! An updated version for newer releases of datasets (and in case you want to read fewer files):

Shell script to download a subset of the dataset

#!/bin/bash

DATA_DIR='/home/ubuntu/data/dolma'
PARALLEL_DOWNLOADS='10'
DOLMA_VERSION='v1_6-sample'

# Clone the repository if it doesn't already exist
if [ ! -d "dolma" ]; then
    git clone git@hf.co:datasets/allenai/dolma
fi

mkdir -p "${DATA_DIR}"

# Download only the first 10 files of 103
echo "Downloading the first 10 files from Dolma"

head -n 10 "dolma/urls/${DOLMA_VERSION}.txt" | xargs -n 1 -P "${PARALLEL_DOWNLOADS}" wget -q -P "$DATA_DIR"

echo "Finished downloading the first 10 files from Dolma. Load data using the following code:"
echo ""
echo "from datasets import load_dataset"
echo "file_pattern = 'v1_5r2_sample-*.json.gz'"
echo "dataset = load_dataset('json', data_files=f'{args.data_dir}/{file_pattern}', split='train', streaming=True)"

And in your Python file:

from datasets import load_dataset

file_pattern = 'v1_5r2_sample-*.json.gz'

dataset = load_dataset(
    'json',
    data_files=f'{args.data_dir}/{file_pattern}',  # args.data_dir points at DATA_DIR from the script above
    split='train',
    streaming=True,
)
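
In case it is unclear where args comes from: in my setup it is just argparse; a self-contained sketch (the flag name and default path are mine, not anything required by datasets):

import argparse
from datasets import load_dataset

parser = argparse.ArgumentParser()
parser.add_argument('--data_dir', default='/home/ubuntu/data/dolma',
                    help='Directory containing the downloaded .json.gz files')
args = parser.parse_args()

file_pattern = 'v1_5r2_sample-*.json.gz'
dataset = load_dataset(
    'json',
    data_files=f'{args.data_dir}/{file_pattern}',
    split='train',
    streaming=True,
)
print(next(iter(dataset)))  # sanity check: show the first record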
