update github links in README
README.md CHANGED
@@ -7,7 +7,7 @@ pretty_name: SlimPajama-627B
 ---
 The dataset consists of 59166 jsonl files and is ~895GB compressed. It is a cleaned and deduplicated version of [Together's RedPajama](https://github.com/togethercomputer/redpajama-data).
 
-Check out our [blog post](https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama) explaining our methods and join the discussion on the [Cerebras Discord](https://discord.gg/q6bZcMWJVu).
+Check out our [blog post](https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama) explaining our methods, [our code on GitHub](https://github.com/Cerebras/modelzoo/tree/main/modelzoo/transformers/data_processing/slimpajama), and join the discussion on the [Cerebras Discord](https://discord.gg/q6bZcMWJVu).
 
 ## Getting Started
 You can download the dataset using Hugging Face datasets:
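The hunk above ends on the sentence that introduces the download snippet; the snippet itself falls outside the diff context. As a minimal sketch (not part of this commit), loading the dataset with the Hugging Face `datasets` library might look like the following, assuming the hub repo id `cerebras/SlimPajama-627B` and using streaming so the ~895GB compressed archive is not fetched up front:

```python
# Minimal sketch: stream SlimPajama-627B via Hugging Face datasets.
# Streaming iterates over shards on demand instead of downloading
# the full ~895GB compressed dataset into the local cache.
from datasets import load_dataset

dataset = load_dataset("cerebras/SlimPajama-627B", split="train", streaming=True)

# Each record is a JSON object; the raw document lives in the "text" field.
first = next(iter(dataset))
print(first["text"][:200])
```

Dropping `streaming=True` would download and cache the entire dataset locally before returning.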
@@ -28,7 +28,7 @@ In addition to the data, we are also releasing the tools we built to create Slim
 2. Releasing validation and test sets, 500M tokens each, which has been decontaminated against the training data.
 3. Library of methods to replicate or pre-process from scratch other datasets. To the best of our knowledge these are the first open-source tools to enable cleaning and MinHashLSH deduplication of text data at trillion token scale.
 
-The full set of scripts to recreate the dataset from the original RedPajama dataset
+The full set of scripts to recreate the dataset from the original RedPajama dataset are available on the [Cerebras GitHub](https://github.com/Cerebras/modelzoo/tree/main/modelzoo/transformers/data_processing/slimpajama). A deeper explanation of our cleaning and deduplication process can be found in the [SlimPajama blog post](https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama).
 
 ## Dataset Summary
 