How to use the load_dataset() function from the datasets library to load a dataset from a locally downloaded .tgz file

#2
by KrishnaZ - opened

I wanted to get all the records of the cnn_dailymail dataset into a single CSV, but have been unsuccessful in finding a way.

Currently I have downloaded the dataset locally (the file called cnn_stories.tgz). I have unzipped the .tgz and got a folder of .story files, each containing the text and summary for one record in the dataset. Because there are 100k records, I have got 100k .story files.

The problem with such an extraction is that I now have 100k story files, each holding an article and its summary. Ideally I want this in CSV format, with two columns (one for the article and one for the highlights) and 100k rows.

I want to do this using only a locally downloaded dataset (due to proxy issues on my work system).

@KrishnaZ hi, did you find a solution for this? I'm facing the same problem. Thank you.

Hi, I need to know whether this dataset is clustered and organized like DUC2003 or DUC2004, because I need relevant and non-relevant datasets.

from datasets import load_dataset
dataset = load_dataset("cnn_dailymail",version="3.0.0")

ValueError: At least one data file must be specified, but got data_files=None

I couldn't find the reason.

dataset = load_dataset('cnn_dailymail', '3.0.0')

This worked for me after several tries. Strange.
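The likely reason is that in `load_dataset` the second positional parameter is the configuration *name*, so `'3.0.0'` must be passed positionally (or as `name='3.0.0'`); `version='3.0.0'` does not select a config, which plausibly explains the fall-through to a generic loader and the `data_files=None` error. A sketch of the working call (this downloads the dataset, so it will not help on a proxy-restricted machine):

```python
from datasets import load_dataset

# "3.0.0" here is the config name of cnn_dailymail,
# not a version keyword argument.
dataset = load_dataset("cnn_dailymail", "3.0.0")

# Each split can then be written straight to CSV
# with the built-in exporter.
dataset["train"].to_csv("cnn_dailymail_train.csv")
```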

I used these commands and it eventually worked:

!pip install -U transformers
!pip install -U accelerate
!pip install -U datasets
!pip install -U bertviz
!pip install -U umap-learn
!pip install -U sentencepiece
!pip install -U urllib3
!pip install py7zr
