---
task_categories:
  - text-generation
language:
  - zh
tags:
  - llm
  - causal-lm
  - language-modeling
pretty_name: SkyPile-150B
size_categories:
  - 100B<n<1T
---

# SkyPile-150B

## Dataset Summary

SkyPile-150B is a large-scale Chinese dataset created for pre-training large language models. It is derived from publicly available Chinese Internet web pages, which undergo strict filtering, extensive deduplication, and sensitive-data removal. In addition, fastText and BERT classifiers are used to filter out low-quality data.

The public portion of SkyPile-150B contains approximately 166M individual web pages, each averaging over 1,000 Chinese characters, for a total of approximately 150B tokens and 592 GB of plain text data.
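
As an illustration of the quality-filtering step described above, the minimal sketch below scores documents with a fastText classifier and keeps those above a threshold. The actual SkyPile classifier, labels, and thresholds are not released; the model path and label name here are hypothetical.

```python
import fasttext

# Hypothetical, locally trained quality classifier; not an official SkyPile artifact.
model = fasttext.load_model("quality_model.bin")

def is_high_quality(text: str, threshold: float = 0.9) -> bool:
    # fastText predicts on a single line, so strip newlines first.
    labels, probs = model.predict(text.replace("\n", " "))
    return labels[0] == "__label__high_quality" and probs[0] >= threshold

docs = ["...raw web page text...", "...another page..."]
kept = [d for d in docs if is_high_quality(d)]
```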

## Language

SkyPile-150B consists of Chinese-language data.

## Data Fields

- `text`: the processed and cleaned text contained in the page.
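
The data can be read with the Hugging Face `datasets` library. The sketch below assumes the dataset is hosted under the repo id `Skywork/SkyPile-150B` and uses streaming so the full ~592 GB corpus is not downloaded up front:

```python
from datasets import load_dataset

# Stream records instead of downloading the whole corpus.
ds = load_dataset("Skywork/SkyPile-150B", split="train", streaming=True)

for example in ds:
    print(example["text"][:200])  # each record exposes a single `text` field
    break
```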

## Sensitive Information and Bias

Because SkyPile-150B is built from publicly available web pages, it may contain sensitive information such as email addresses, phone numbers, or IP addresses. We believe deduplication and low-quality filtering help reduce such data, but practitioners working with SkyPile-150B should still exercise care. Likewise, because toxic and biased content is prevalent on the Internet, we filter it using URL-based methods, yet users should be aware that some may remain.
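
As a starting point for the care recommended above, practitioners might scan documents for residual PII with simple patterns like the following sketch. These regexes are illustrative only and are not part of the SkyPile pipeline; the phone pattern assumes mainland-China mobile numbers.

```python
import re

# Illustrative PII detectors: email addresses, mainland-China mobile numbers, IPv4 addresses.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"(?:\+?86[- ]?)?1[3-9]\d{9}"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def find_pii(text: str) -> dict[str, list[str]]:
    """Return any PII-like matches found in a document, keyed by kind."""
    hits = {kind: pat.findall(text) for kind, pat in PII_PATTERNS.items()}
    return {kind: matches for kind, matches in hits.items() if matches}
```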

## Social Impact of Dataset

With the open-source release of SkyPile-150B, we aim to increase access to high-quality web data, which has typically been kept private by model developers. We believe this release will in turn improve the accessibility and spread of performant large language models.