---
dataset_info:
  - config_name: '20231001'
    features:
      - name: id
        dtype: string
      - name: title
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 2150584347
        num_examples: 1857355
    download_size: 0
    dataset_size: 2150584347
  - config_name: latest
    features:
      - name: id
        dtype: string
      - name: title
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 2150584347
        num_examples: 1857355
    download_size: 0
    dataset_size: 2150584347
configs:
  - config_name: '20231001'
    data_files:
      - split: train
        path: 20231001/train-*
  - config_name: latest
    data_files:
      - split: train
        path: latest/train-*
---

# Dataset Card for Wikipedia - Portuguese

## Dataset Description

Portuguese-language Wikipedia articles (1,857,355 articles, ~2.15 GB of text), each with `id`, `title`, and `text` fields. Two configurations are available:

- `latest`
- `20231001`

## Usage

```python
from datasets import load_dataset

dataset = load_dataset('pablo-moreira/wikipedia-pt', 'latest')
# dataset = load_dataset('pablo-moreira/wikipedia-pt', '20231001')
```
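Each example is a plain dict following the schema above (`id`, `title`, `text`, all strings). A minimal sketch of working with records of that shape — the sample records below are illustrative stand-ins, not real dataset rows:

```python
# Sample records matching the dataset's schema: id, title, text (all strings).
# With the real dataset, iterate over dataset['train'] instead.
sample = [
    {"id": "1", "title": "Brasil", "text": "O Brasil é o maior país da América do Sul."},
    {"id": "2", "title": "Lisboa", "text": "Lisboa é a capital de Portugal."},
]

def find_by_title(records, title):
    """Return the first record whose title matches exactly, or None."""
    return next((r for r in records if r["title"] == title), None)

article = find_by_title(sample, "Lisboa")
print(article["text"])  # Lisboa é a capital de Portugal.
```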

## Extractor

Notebook with the code for extracting articles from the Wikipedia dump, based on code from the fast.ai NLP introduction course.

Notebook
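For readers without access to the notebook, the general approach can be sketched with the standard library alone: stream over the MediaWiki XML export format and pull out each page's id, title, and text. This is a hedged sketch of the technique, not the notebook's actual code; the inline `SAMPLE_DUMP` stands in for a real dump file, and the export namespace version is an assumption:

```python
import xml.etree.ElementTree as ET
from io import StringIO

# Tiny inline sample in the MediaWiki export format; a real dump file
# follows the same structure at tens-of-GB scale.
SAMPLE_DUMP = """<mediawiki xmlns="http://www.mediawiki.org/xml/export-0.10/">
  <page>
    <title>Brasil</title>
    <id>1</id>
    <revision><text>O Brasil é um país.</text></revision>
  </page>
  <page>
    <title>Lisboa</title>
    <id>2</id>
    <revision><text>Lisboa é a capital de Portugal.</text></revision>
  </page>
</mediawiki>"""

# Namespace assumed from export schema 0.10; check your dump's root element.
NS = "{http://www.mediawiki.org/xml/export-0.10/}"

def extract_pages(source):
    """Stream (id, title, text) tuples from a MediaWiki XML dump."""
    for _, elem in ET.iterparse(source, events=("end",)):
        if elem.tag == NS + "page":
            page_id = elem.findtext(NS + "id")
            title = elem.findtext(NS + "title")
            text = elem.findtext(f"{NS}revision/{NS}text")
            yield page_id, title, text
            elem.clear()  # release the subtree so memory stays bounded

pages = list(extract_pages(StringIO(SAMPLE_DUMP)))
```

Because `iterparse` streams the file and each page subtree is cleared after use, this pattern handles dumps far larger than memory.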

## Links