inarikami committed on
Commit a365dcb
Parent: e5a4093

add example load to readme

Files changed (1)
  1. README.md +17 -0
README.md CHANGED
@@ -4,6 +4,23 @@ This dataset is a comprehensive pull of all Japanese wikipedia article data as o
 
 *Note:* Right now it's uploaded as a single cleaned gzip file (for faster usage). I'll update this in the future to include a Hugging Face `datasets`-compatible class and better support for Japanese than the existing wikipedia repo.
 
+### Example use case:
+
+```shell
+gunzip jawiki20220808.json.gz
+```
+
+```python
+import pandas as pd
+from datasets import load_dataset
+df = pd.read_json(path_or_buf="jawiki20220808.json", lines=True)
+# *your preprocessing here*
+df.to_csv("jawiki.csv", index=False)
+dataset = load_dataset("csv", data_files="jawiki.csv")
+dataset['train'][0]
+```
+
+
 The Wikipedia articles were processed from their compressed format into a 7 GB JSONL file, with extraneous characters filtered out, using this repo: https://github.com/singletongue/WikiCleaner.
 
 Sample Text:
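
Since the dump is line-delimited JSON, the CSV round trip in the example above is optional. A minimal sketch of loading the still-gzipped JSONL directly with the `datasets` JSON loader, assuming the `jawiki20220808.json.gz` filename from the example:

```python
from datasets import load_dataset

# The JSON loader handles gzip-compressed, line-delimited files itself,
# so neither the gunzip step nor the CSV conversion is needed.
dataset = load_dataset("json", data_files="jawiki20220808.json.gz")
dataset["train"][0]
```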
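
If pandas preprocessing is still wanted, the intermediate CSV can also be skipped by converting the DataFrame in memory. A sketch under the same filename assumption:

```python
import pandas as pd
from datasets import Dataset

# pandas infers gzip compression from the .gz extension,
# so the explicit gunzip step is optional here too.
df = pd.read_json("jawiki20220808.json.gz", lines=True)
# *your preprocessing here*

# Build a Dataset directly from the DataFrame, no CSV file needed.
dataset = Dataset.from_pandas(df)
dataset[0]
```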