---
license: cc-by-nc-sa-4.0
language:
  - en
size_categories:
  - 100K<n<1M
configs:
  - config_name: default
    data_files:
      - split: train
        path:
          - data/train.json
task_categories:
  - text-generation
tags:
  - Human-centered NLP
  - KidLM (corpus)
  - Language Models for Children
  - User-Centric Data Collection Pipeline
pretty_name: KidLM (corpus) - Advancing Language Models for Children
---

# Dataset Card for KidLM

KidLM (corpus)

## Dataset Description

### Dataset Summary

The KidLM corpus consists of high-quality, child-appropriate content written specifically for children and occasionally by them. This content has been meticulously reviewed and validated by website editors or moderators to ensure suitability and to eliminate inappropriate content or sensationalism. Collected through our user-centric data collection pipeline with quality filtering, the corpus is comprehensive, diverse, and carefully tailored for developing language models aimed at young audiences. For more details, please refer to our EMNLP 2024 paper.

### Languages

English

### Loading the Dataset

```python
from datasets import load_dataset

# Download the corpus from the Hugging Face Hub
ds = load_dataset("tafseer-nayeem/KidLM-corpus")
train_dataset = ds["train"]

print(f"Number of samples: {len(train_dataset)}")
```

## Dataset Structure

### Data Instances

One example from our KidLM (corpus) is given below in JSON format.

```json
{
    "_id": 6,
    "text": "Dink, Josh, and Ruth Rose are on their fun and exciting mystery! Dink has Josh and Ruth Rose to help him with his mystery. Josh likes to eat. Ruth Rose dresses all in one color including her headband. The fifth envelope has something fishy with it. It’s empty but it has something behind the stamps. Read to find out what happens. I think Dink, Josh, and Ruth Rose are curious, excited, happy, and sometimes playful. They are best friends and they want to help each other whatever comes their way. I like this book because Ron Roy has a series called A to Z Mysteries. The series always has a new crime to solve. This book connects to my life because I love solving mysteries. And I have mysteries to solve at my house, a lot of them. Ron Roy is a great writer. I bet he knows a lot about mysteries because his books are very suspenseful. Once you get into the story the crime unfolds but you don’t know who did it until the very end. My favorite part was when they get to drag the criminals off to jail. I can’t tell you who it is because that is telling too much.I would recommend this book for ages 5-18 because they might like mysteries too. And they might want to learn more about solving a crime."
}
```

### Data Fields

- `_id` (int): unique identifier for this sample from the KidLM (corpus).
- `text` (string): the main text content of the document.
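
As a quick illustration of the schema, a record can be checked against these two fields (the sample below is a shortened copy of the example instance above):

```python
# Shortened copy of the example instance above; field names match the card.
sample = {
    "_id": 6,
    "text": "Dink, Josh, and Ruth Rose are on their fun and exciting mystery!",
}

# Each record carries an integer identifier and the document text.
assert isinstance(sample["_id"], int)
assert isinstance(sample["text"], str)
print(f"Sample {sample['_id']} has {len(sample['text'].split())} words.")
```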

## User-Centric Data Collection Pipeline

Our user-centric approach gathers text written specifically for, and occasionally by, children. The data is vetted for appropriateness to exclude unsuitable material. We consider both the demographics of the content creators and the intended audience, ensuring suitability for young users.

*User-Centric Data Collection Pipeline for our KidLM (corpus).*

- **Content Sources**: Includes text specifically written for and occasionally by children.
- **Thorough Review**: All content is reviewed and validated by website editors or moderators.
- **Ensuring Suitability**: Emphasis on appropriateness, avoiding sensationalism or inappropriate material.
- **Two Key Aspects**:
  - **"Who?"**: Demographics and intentions of content creators.
  - **"Whom?"**: Intended audience, ensuring the content is suitable for children.

## Dataset Stats

The KidLM corpus spans genres such as science, sports, and history, collected from 21 sources worldwide. This diversity supports linguistic variety, and its 67.97 million tokens support robust language model training.

| SN. | Data Sources | #Docs | #Sents | Avg. #Sents | Avg. #Words |
|----:|--------------|------:|-------:|------------:|------------:|
| 1 | CBC Kids | 262 | 5,959 | 22.74 (± 16.33) | 349.63 (± 252.02) |
| 2 | CBC Kids News | 2,559 | 62,293 | 24.34 (± 15.04) | 531.2 (± 339.02) |
| 3 | Curious Times | 8,493 | 107,649 | 12.68 (± 11.13) | 206.23 (± 179.84) |
| 4 | The Kids News | 450 | 12,776 | 28.39 (± 20.26) | 554.79 (± 381.31) |
| 5 | Kids Frontiers | 1,210 | 121,156 | 100.13 (± 21.83) | 2240.82 (± 481.03) |
| 6 | Kids News & Reviews | 84 | 5,004 | 59.57 (± 40.99) | 1267.42 (± 895.29) |
| 7 | Kids’ News NYC | 238 | 7,708 | 32.39 (± 21.29) | 692.54 (± 456.23) |
| 8 | Kids News (India) | 2,637 | 32,324 | 12.26 (± 14.35) | 226.59 (± 255.4) |
| 9 | Kids Press | 1,628 | 39,738 | 24.41 (± 11.81) | 475.77 (± 214.47) |
| 10 | News for Kids | 1,619 | 57,079 | 35.26 (± 9.91) | 608.63 (± 172.56) |
| 11 | Smithsonian Magazine | 20 | 1,043 | 52.15 (± 41.44) | 1190.25 (± 870.1) |
| 12 | Teaching Kids News | 1,127 | 37,403 | 33.19 (± 10.05) | 636.12 (± 197.06) |
| 13 | Time for Kids | 2,109 | 44,413 | 21.06 (± 18.2) | 294.71 (± 291.46) |
| 14 | Twinkl Newsroom | 876 | 19,408 | 22.16 (± 9.32) | 375.22 (± 142.62) |
| 15 | Washington Post (Kids) | 1,622 | 48,132 | 29.67 (± 17.08) | 573.27 (± 297.04) |
| 16 | Indy Kids | 1,658 | 21,671 | 13.07 (± 14.36) | 306.26 (± 324.27) |
| 17 | Kids News | 915 | 20,052 | 21.91 (± 31.67) | 586.23 (± 606.99) |
| 18 | Kiwi Kids News | 7,163 | 28,936 | 4.04 (± 4.67) | 159.21 (± 125.7) |
| 19 | Spaghetti Book Club | 12,095 | 168,346 | 13.92 (± 6.11) | 227.12 (± 100.97) |
| 20 | Toppsta | 34,471 | 146,302 | 4.24 (± 2.96) | 117.62 (± 81.22) |
| 21 | Simple Wiki | 205K | 1.924M | 9.37 (± 17.98) | 185.59 (± 406.98) |

### Data Diversity & Quantity

- **Data Diversity**:
  - The corpus includes a variety of genres: science, sports, history, animals, geography, technology, current events, book reviews, and more.
  - Data collected from 21 sources across different regions: USA (4), India (4), Canada (3), Australia (1), UK (1), New Zealand (1), and other global sources (7).
- **Data Quantity**:
  - The KidLM corpus comprises 286,000+ documents, 2.91 million sentences, and 50.43 million words, resulting in 67.97 million tokens.
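
The document total can be sanity-checked against the per-source table above; the snippet below sums the #Docs column, using 205,000 as the rounded Simple Wiki count:

```python
# Per-source document counts copied from the Dataset Stats table
# (Simple Wiki's "205K" approximated as 205,000).
docs_per_source = [
    262, 2559, 8493, 450, 1210, 84, 238, 2637, 1628, 1619, 20,
    1127, 2109, 876, 1622, 1658, 915, 7163, 12095, 34471, 205000,
]
total_docs = sum(docs_per_source)
print(f"Total documents: {total_docs:,}")  # consistent with the 286,000+ figure above
```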

## Data Verification and Filtering

### Data Verification

- We manually reviewed the data sources.
- Focused on the "about" sections of identified websites.
- Ensured the quality and relevance of each source for inclusion.

### Filtering

#### Quality Filtering

1. Extracted articles tagged specifically for children.
2. Identified content labeled as “kidspost.”
3. Excluded articles marked as potentially inappropriate (e.g., tagged with red).
4. Selected data relevant to specific grade levels (K-1, 2-3, 4-5, and 6).

#### Additional Filtering

- **Language Filtering**:
  - Retained only English-language texts.
  - Filtered out code-switched and code-mixed texts.
  - Used the spacy-langdetect toolkit for language detection, keeping only sentences with a confidence score of ≥ 0.9 to filter out code-mixed texts.
- **Personal Identifying Information (PII)**:
  - Data Anonymization: Avoided collecting author names to ensure privacy.
  - Preprocessing: Removed personal contact details (e.g., emails, phone numbers, Twitter handles) from the texts using regular expressions.
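
A minimal sketch of this kind of regex-based scrubbing; the patterns below are hypothetical stand-ins for the (unpublished) expressions used in the actual preprocessing:

```python
import re

# Hypothetical patterns illustrating the simple regexes described above.
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")
HANDLE_RE = re.compile(r"@\w{1,15}\b")

def scrub_pii(text: str) -> str:
    """Remove emails first (they contain '@'), then phone numbers and handles."""
    text = EMAIL_RE.sub("", text)
    text = PHONE_RE.sub("", text)
    text = HANDLE_RE.sub("", text)
    return text

sample = "Contact me at kid@example.com or @kidwriter, call 555-123-4567."
print(scrub_pii(sample))
```

Removing emails before handles matters: the handle pattern would otherwise strip only the `@domain` part of an address and leave the local part behind.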

## Considerations for Using the Data

### Social Impact of Dataset

The KidLM corpus has the potential to positively impact children’s digital experiences by providing a foundation for developing language models tailored specifically for young audiences. Given that children constitute one in three internet users globally, with those aged 8-12 spending over five hours daily on screens, there is a pressing need for safe, educational, and child-appropriate online content. The KidLM dataset, meticulously curated with high-quality content written for and occasionally by children, addresses this need by ensuring the content is suitable and free from inappropriate or sensational material. For more details, please refer to our EMNLP 2024 paper.

### Discussion of Ethics

We took ethical considerations into account when scraping data from these sources. The data we collected is intended exclusively for non-commercial research purposes. We conducted our web scraping at a reasonable request rate, with no intention of causing a Distributed Denial of Service (DDoS) attack.

### Discussion of Biases

We made significant efforts to minimize offensive content in the pre-training data by deliberately crawling sites where such content is minimal. However, we cannot provide an absolute guarantee that no such content is present. We strongly recommend exercising caution when directly using the KidLM (corpus).

### Protection of Privacy

We deliberately chose not to collect specific information, such as author names (whether they are children or reporters) and the publication dates of articles. Additionally, we preprocess the data to remove any personal contact details, including email addresses, phone numbers, and Twitter handles, by applying simple regular expressions to the pre-training corpus. As a result, our dataset minimizes the presence of Personal Identifying Information (PII). This decision highlights our commitment to prioritizing user privacy.


## Licensing Information

The contents of this repository are restricted to non-commercial research purposes only, under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0). Copyright of the dataset contents belongs to the original copyright holders.


## Citation Information

If you use any of these resources or they are relevant to your work, please cite our EMNLP 2024 paper.

```bibtex
@inproceedings{nayeem-rafiei-2024-kidlm,
    title = "{K}id{LM}: Advancing Language Models for Children {--} Early Insights and Future Directions",
    author = "Nayeem, Mir Tafseer  and
      Rafiei, Davood",
    editor = "Al-Onaizan, Yaser  and
      Bansal, Mohit  and
      Chen, Yun-Nung",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-main.277",
    pages = "4813--4836",
    abstract = "Recent studies highlight the potential of large language models in creating educational tools for children, yet significant challenges remain in maintaining key child-specific properties such as linguistic nuances, cognitive needs, and safety standards. In this paper, we explore foundational steps toward the development of child-specific language models, emphasizing the necessity of high-quality pre-training data. We introduce a novel user-centric data collection pipeline that involves gathering and validating a corpus specifically written for and sometimes by children. Additionally, we propose a new training objective, Stratified Masking, which dynamically adjusts masking probabilities based on our domain-specific child language data, enabling models to prioritize vocabulary and concepts more suitable for children. Experimental evaluations demonstrate that our model excels in understanding lower grade-level text, maintains safety by avoiding stereotypes, and captures children{'}s unique preferences. Furthermore, we provide actionable insights for future research and development in child-specific language modeling.",
}
```

## Contributors