---
language: pt
license: cc-by-4.0
multilinguality:
- monolingual
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
size_categories:
- 100M<n<1B
# Canarim: A Large-Scale Dataset of Web Pages in the Portuguese Language
## Introduction
Canarim is a dataset of over 342 million Portuguese-language documents sourced from multiple CommonCrawl snapshots. At nearly 1 terabyte, it is one of the most extensive collections of Portuguese-language data available. It has undergone an initial URL-based deduplication, with further text-based deduplication and filtering of potentially harmful content planned. The data, originally in HTML, was converted to Markdown with the `Trafilatura` library to improve readability and quality. Canarim aims to be a key resource for NLP research in Portuguese, helping to fill the gap in large-scale, high-quality data for languages other than English.
## Dataset Structure
### Data Instances
An example looks as follows:
```json
{
  "url": "...",
  "content_languages": "por",
  "warc_filename": "crawl-data/CC-MAIN-2023-06/segments/1674764500041.18/warc/CC-MAIN-20230202200542-20230202230542-00352.warc.gz",
  "warc_record_offset": 971279893,
  "warc_record_length": 3873,
  "text": "...",
  "crawl_timestamp": "2023-02-02T20:28:21Z"
}
```
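For illustration, a minimal sketch of parsing a record of this shape with the standard library (the `url` and `text` values below are hypothetical placeholders, not real dataset content):

```python
import json
from datetime import datetime

# A record shaped like the example above; url/text are placeholders.
record_json = r'''
{
  "url": "https://example.com/artigo",
  "content_languages": "por",
  "warc_filename": "crawl-data/CC-MAIN-2023-06/segments/1674764500041.18/warc/CC-MAIN-20230202200542-20230202230542-00352.warc.gz",
  "warc_record_offset": 971279893,
  "warc_record_length": 3873,
  "text": "# Titulo\n\nConteudo em Markdown...",
  "crawl_timestamp": "2023-02-02T20:28:21Z"
}
'''
record = json.loads(record_json)

# crawl_timestamp is ISO 8601; the trailing "Z" denotes UTC.
ts = datetime.fromisoformat(record["crawl_timestamp"].replace("Z", "+00:00"))
print(ts.year)
print(record["content_languages"])
```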
### Data Fields
- `url`: URL of the page
- `content_languages`: Language of the page, as an ISO 639-3 code (e.g. `por`)
- `warc_filename`: Name of the WARC file in the CommonCrawl archive
- `warc_record_offset`: Byte offset of the record within the WARC file
- `warc_record_length`: Length of the compressed record, in bytes
- `text`: Text of the page, in Markdown format
- `crawl_timestamp`: Timestamp of the crawl (ISO 8601, UTC)
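Because each row carries its WARC coordinates, the original record can be retrieved from Common Crawl's public data endpoint with an HTTP Range request. A minimal standard-library sketch (the host below is Common Crawl's standard data endpoint; the commented-out fetch requires network access):

```python
import gzip
import urllib.request

CC_BASE = "https://data.commoncrawl.org/"

def warc_range_request(warc_filename: str, offset: int, length: int) -> urllib.request.Request:
    """Build a Range request for a single WARC record (byte range is inclusive)."""
    url = CC_BASE + warc_filename
    headers = {"Range": f"bytes={offset}-{offset + length - 1}"}
    return urllib.request.Request(url, headers=headers)

def fetch_warc_record(warc_filename: str, offset: int, length: int) -> bytes:
    """Download and decompress one gzip-compressed WARC record."""
    req = warc_range_request(warc_filename, offset, length)
    with urllib.request.urlopen(req) as resp:  # expects HTTP 206 Partial Content
        compressed = resp.read()
    return gzip.decompress(compressed)

# Usage with the example record above (network access required):
# raw = fetch_warc_record(
#     "crawl-data/CC-MAIN-2023-06/segments/1674764500041.18/warc/"
#     "CC-MAIN-20230202200542-20230202230542-00352.warc.gz",
#     971279893,
#     3873,
# )
# print(raw[:200].decode("utf-8", errors="replace"))
```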
*Work in progress.*