---
license: mit
---

# Wikipedia Summary Dataset 128k

This is a random subsample of 128k entries from the [wikipedia summary dataset](https://huggingface.co/datasets/mbukowski/wikipedia-summary-dataset), processed with the following code:

```python
import pandas as pd

df = pd.read_parquet('wikipedia-summary.parquet')

# Keep only summaries between 300 and 600 characters long
df['l'] = df['summary'].str.len()
rdf = df[(df['l'] > 300) & (df['l'] < 600)]

# Keep only rows whose 'topic' consists solely of alphanumeric characters and spaces
# (the `== True` comparison also drops rows with missing topics)
mask = rdf['topic'].str.contains(r'^[a-zA-Z0-9 ]+$') == True
rdf = rdf[mask].sample(128000)[['topic', 'summary']].copy().sort_values('topic').reset_index(drop=True)

rdf.to_csv('wikipedia-summary-128k.tsv', sep='\t', index=False)
```

## Citation

```
@mastersthesis{scheepers2017compositionality,
  author  = {Scheepers, Thijs},
  title   = {Improving the Compositionality of Word Embeddings},
  school  = {Universiteit van Amsterdam},
  year    = {2017},
  month   = {11},
  address = {Science Park 904, Amsterdam, Netherlands}
}
```
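
## Loading the data

The exported TSV can be read back with pandas. This is a minimal sketch; it assumes `wikipedia-summary-128k.tsv`, as written by the script above, sits in the working directory.

```python
import pandas as pd

# Read the tab-separated export produced by the processing script
rdf = pd.read_csv('wikipedia-summary-128k.tsv', sep='\t')

print(rdf.shape)              # expected: (128000, 2)
print(rdf.columns.tolist())   # ['topic', 'summary']
print(rdf.head())
```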