---
dataset_info:
- config_name: '20231001'
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 2150584347
    num_examples: 1857355
  download_size: 0
  dataset_size: 2150584347
- config_name: latest
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 2150584347
    num_examples: 1857355
  download_size: 0
  dataset_size: 2150584347
configs:
- config_name: '20231001'
  data_files:
  - split: train
    path: 20231001/train-*
- config_name: latest
  data_files:
  - split: train
    path: latest/train-*
---
# Dataset Card for Wikipedia - Portuguese

## Dataset Description

Portuguese-language Wikipedia articles extracted from the official Wikipedia dumps. Each configuration provides a single `train` split of 1,857,355 articles with `id`, `title`, and `text` fields. Two configurations are available:

- `latest`
- `20231001`

## Usage
```python
from datasets import load_dataset

# Load the most recent extraction
dataset = load_dataset('pablo-moreira/wikipedia-pt', 'latest')

# Or pin the dated snapshot
# dataset = load_dataset('pablo-moreira/wikipedia-pt', '20231001')
```
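Each configuration exposes a single `train` split whose records carry the `id`, `title`, and `text` features listed in the YAML header above. A quick way to inspect a record after loading:

```python
# Look at the first article in the training split.
example = dataset['train'][0]
print(example['id'])
print(example['title'])
print(example['text'][:200])  # first 200 characters of the article body
```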

## Extractor

A notebook with the code used to extract the documents from the Wikipedia dump, based on code from the fast.ai "A Code-First Intro to Natural Language Processing" course.

[Notebook](extractor.ipynb)
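The notebook linked above contains the actual extraction code. As a rough illustration only (not taken from the notebook), the sketch below shows how JSON-lines output from a generic dump extractor could be reshaped into the `id`/`title`/`text` records this dataset uses; the `extracted.jsonl` path and the field names of the extractor output are assumptions for the example.

```python
import json

def load_extracted_articles(path: str):
    """Read one JSON object per line and keep only the fields used by this dataset.

    Assumes each line already contains 'id', 'title' and 'text' keys, as produced
    by common Wikipedia dump extractors; adjust the keys if your extractor differs.
    """
    articles = []
    with open(path, encoding='utf-8') as fh:
        for line in fh:
            record = json.loads(line)
            articles.append({
                'id': str(record['id']),
                'title': record['title'],
                'text': record['text'],
            })
    return articles

# Example (hypothetical file name):
# articles = load_extracted_articles('extracted.jsonl')
```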

## Links

- **[Wikipedia dumps](https://dumps.wikimedia.org/)**
- **[A Code-First Intro to Natural Language Processing](https://github.com/fastai/course-nlp)**
- **[Extractor Code](https://github.com/fastai/course-nlp/blob/master/nlputils.py)**