Dataset: wikihow

How to load this dataset directly with the 🤗/nlp library:

from nlp import load_dataset

# wikihow requires a config name ("all" or "sep") and the path to the
# manually downloaded CSV files, passed via data_dir:
dataset = load_dataset("wikihow", "all", data_dir="path/to/manual/folder")


WikiHow is a new large-scale summarization dataset built from the online WikiHow knowledge base.

There are two features:
- text: the WikiHow article text.
- headline: the bold lines, used as the summary.

There are two separate versions:
- all: the concatenation of all paragraphs of an article as the text, with the bold lines as the reference summary.
- sep: each paragraph paired with its own summary.

Manual download: download "wikihowAll.csv" and "wikihowSep.csv" and place them in a manual download folder.

Train/validation/test splits are provided by the authors. Preprocessing removes short articles (keeping only those where the abstract length is less than 0.75 × the article length) and cleans up extra commas.
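The length-based filtering step above can be sketched as follows. This is a minimal illustration, not the authors' actual preprocessing code; the function name `keep_article` and the sample records are assumptions.

```python
# Hypothetical sketch of the length filter described above: an article is
# kept only when its bold-line summary (headline) is shorter than 75% of
# the article text. Names and data are illustrative.

def keep_article(text: str, headline: str, ratio: float = 0.75) -> bool:
    """Return True if the summary is short enough relative to the text."""
    return len(headline) < ratio * len(text)

articles = [
    {"text": "A long how-to article with many detailed steps ...",
     "headline": "Short summary."},
    {"text": "Tiny.",
     "headline": "A headline almost as long as the article itself."},
]

# Only the first record survives: its headline is well under 75% of the
# text length, while the second record's headline dwarfs its text.
kept = [a for a in articles if keep_article(a["text"], a["headline"])]
```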


Citation:

@article{koupaee2018wikihow,
    title={WikiHow: A Large Scale Text Summarization Dataset},
    author={Mahnaz Koupaee and William Yang Wang},
    journal={arXiv preprint arXiv:1810.09305},
    year={2018}
}

Models trained or fine-tuned on wikihow

None yet. Start fine-tuning now =)