---
dataset_info:
  features:
    - name: input_ids
      sequence: int32
    - name: attention_mask
      sequence: int8
    - name: labels
      sequence: int64
  splits:
    - name: train
      num_bytes: 53397189200
      num_examples: 2004700
    - name: test
      num_bytes: 532720000
      num_examples: 20000
  download_size: 16064350610
  dataset_size: 53929909200
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
license: other
task_categories:
  - text-generation
language:
  - pt
tags:
  - portuguese
  - language-modeling
pretty_name: Pt-Corpus tokenized
size_categories:
  - 1M<n<10M
---
# Portuguese-Corpus (tokenized)
## Dataset Description

- **Homepage:** https://nkluge-correa.github.io/TeenyTinyLlama/
- **Repository:** https://github.com/Nkluge-correa/TeenyTinyLlama
- **Paper:** TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese
- **Point of Contact:** Nk-correa
### Dataset Summary

This repository contains a tokenized version (produced with the TeenyTinyLlama tokenizer) of the Portuguese-Corpus dataset. All sequences are 2048 tokens long. This dataset was used to train the models presented in "TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese".

For more information, see the original dataset card.
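Since every example is a fixed-length, 2048-token sequence, shorter texts must be padded up to the block size. As a rough illustration only (not the actual preprocessing script, which lives in the TeenyTinyLlama repository), a plain-Python sketch of that padding step could look like the following; the `pad_id` and small `block_size` here are arbitrary placeholders:

```python
def pad_to_block(tokens, block_size, pad_id):
    """Pad a list of token ids to a fixed block size.

    Illustrative sketch only: the real dataset was built with the
    TeenyTinyLlama tokenizer and a block size of 2048; the pad_id
    used below is a placeholder.
    """
    n_pad = block_size - len(tokens)
    return {
        # real tokens followed by padding up to the block size
        "input_ids": tokens + [pad_id] * n_pad,
        # 1 marks a real token, 0 marks a padded position
        "attention_mask": [1] * len(tokens) + [0] * n_pad,
        # labels are a copy of input_ids (causal language modeling)
        "labels": tokens + [pad_id] * n_pad,
    }

example = pad_to_block([1026, 1531, 1009, 8067], block_size=8, pad_id=0)
```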
### Languages
Portuguese.
## Dataset Structure

### Data Instances

The dataset consists of the following features:

- **input_ids:** a sequence of 2048 token ids.
- **attention_mask:** a binary mask indicating which positions hold real tokens (1) and which hold padding (0).
- **labels:** a copy of `input_ids`, used as targets for causal language modeling.
### Data Fields

```python
{
  "input_ids": [1026, 1531, 1009, 8067, ...],
  "attention_mask": [1, 1, 1, 1, ...],
  "labels": [1026, 1531, 1009, 8067, ...]
}
```
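Note that `labels` is identical to `input_ids`: in causal language modeling, frameworks such as Hugging Face Transformers shift the labels internally, so the target at each position is the next token. An illustrative plain-Python sketch of that shift:

```python
input_ids = [1026, 1531, 1009, 8067]
labels = list(input_ids)  # labels start as an exact copy of input_ids

# During training, the model predicts token t+1 from tokens 0..t, so the
# effective (context token, target token) pairs come from shifting the
# labels left by one position relative to the inputs.
pairs = list(zip(input_ids[:-1], labels[1:]))
```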
### Data Splits

Available splits are `train` (~2M examples) and `test` (20K examples).

```python
from datasets import load_dataset

dataset = load_dataset("nicholasKluge/Pt-Corpus-tokenized", split='train')

# If you don't want to download the entire dataset, set streaming to `True`
dataset = load_dataset("nicholasKluge/Pt-Corpus-tokenized", split='train', streaming=True)
```
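With `streaming=True`, examples arrive one at a time rather than as an indexed dataset, so a training loop typically groups them into batches as they are consumed. A minimal batching helper in plain Python (illustrative only, not part of the `datasets` API) might look like:

```python
def batched(iterable, batch_size):
    """Group items from any iterable (e.g. a streaming dataset)
    into lists of at most batch_size items."""
    batch = []
    for item in iterable:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final, possibly smaller, batch
        yield batch

# usage with a streaming dataset would be: for batch in batched(dataset, 8): ...
batches = list(batched(range(5), 2))
```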
## Additional Information

### Dataset Curators

### Citation Information
```bibtex
@misc{correa24ttllama,
  title = {TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese},
  author = {Corr{\^e}a, Nicholas Kluge and Falk, Sophia and Fatimah, Shiza and Sen, Aniket and De Oliveira, Nythamar},
  journal = {arXiv preprint arXiv:2401.16640},
  year = {2024}
}

@misc{correa24ttllama-mlwa,
  doi = {10.1016/j.mlwa.2024.100558},
  url = {https://www.sciencedirect.com/science/article/pii/S2666827024000343},
  title = {TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese},
  author = {Corr{\^e}a, Nicholas Kluge and Falk, Sophia and Fatimah, Shiza and Sen, Aniket and De Oliveira, Nythamar},
  journal = {Machine Learning With Applications},
  publisher = {Elsevier},
  year = {2024}
}
```
## Contributions

If you would like to contribute, contact me at nicholas@airespucrs.org!