Alienmaster committed
Commit
f0d4375
1 Parent(s): 8fd1cea

more splits added

Files changed (5)
  1. 100k.parquet +3 -0
  2. 10k.parquet +3 -0
  3. 1mio.parquet +3 -0
  4. 30k.parquet +3 -0
  5. README.md +11 -5
100k.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:66ab6cc1f406aabccd9839f9d04f3d3dd6a3ebf8385e8fadfc740d31e63bac71
+ size 8718951
10k.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6b5ec79e6fa0f153e9cdfb4c89223ed14b7aedc4c5bb4c44e0d9266510aed6c7
+ size 897512
1mio.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:31f513036270acda482f744d09ecc236e396c86f82eaf537d39899ad6d21adfd
+ size 83692241
30k.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8a0d99a26b20afdfce5e43b0203b390c81ae83bc61269bcf716992321335bc17
+ size 2657754
README.md CHANGED
@@ -8,16 +8,22 @@ size_categories:
  - 100K<n<1M
  task_categories:
  - text-classification
- pretty_name: Leipzig Corpora Wikipedia 2016 1 Million Sentences German
+ pretty_name: Leipzig Corpora Wikipedia 2016 German
  configs:
  - config_name: default
    data_files:
-   - split: full
-     path: "full.parquet"
+   - split: 10k
+     path: "10k.parquet"
+   - split: 30k
+     path: "30k.parquet"
+   - split: 100k
+     path: "100k.parquet"
+   - split: 1mio
+     path: "1mio.parquet"
  ---
- ## Leipzig Corpora Wikipedia 2016 1 Million Sentences German
+ ## Leipzig Corpora Wikipedia 2016 German
 
- This dataset contains one million sentences from the German Wikipedia. The data were collected in 2016.
+ This dataset contains several splits (between 10k and 1mio sentences) from the German Wikipedia. The data were collected in 2016.
  Every element in the dataset is labeled as "neutral".
 
  The source can be found [here](https://wortschatz.uni-leipzig.de/de/download/German)