parquet-converter committed on
Commit
db7cfe5
1 Parent(s): bc13020

Update parquet files

.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
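Each deleted rule above binds a glob pattern to the Git LFS filter, so matching files are stored as small pointer files rather than raw blobs. A minimal sketch (a hypothetical helper, not part of this repo or of git-lfs) of checking a path against a subset of those patterns:

```python
import fnmatch

# Subset of the patterns from the deleted .gitattributes above.
LFS_PATTERNS = ["*.parquet", "*.bin", "*.tar.*", "saved_model/**/*", "*tfevents*"]

def is_lfs_tracked(path: str) -> bool:
    """Return True if `path` matches any LFS-tracked glob pattern."""
    name = path.split("/")[-1]
    return any(
        fnmatch.fnmatch(path, pat) or fnmatch.fnmatch(name, pat)
        for pat in LFS_PATTERNS
    )

print(is_lfs_tracked("data/train-00000-of-00002.parquet"))  # True
```

Note that real gitattributes matching has its own semantics (e.g. how `**` and directory-relative patterns are resolved); `fnmatch` is only an approximation for illustration.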
 
data/train-00000-of-00002.parquet → DDSC--reddit-da/parquet-train-00000-of-00002.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:64210cb19f7a4c683daf5cc65a131fb6142e5c64e3951a0afdd45234b6525eed
- size 171385144
+ oid sha256:57dd51a62f8a928482d064979415ccc2a96045dfdebb2fea2efd12e38c6934f8
+ size 317395419
data/train-00001-of-00002.parquet → DDSC--reddit-da/parquet-train-00001-of-00002.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e5acaeab986ee1720da8e7b7041c4e4f2a02d881f80084ee00fedb6b7e7faecc
- size 165206513
+ oid sha256:0d2acdb989fa2f533de4f8d6d456a8f2204ca7daa64286f5fb302737d196858c
+ size 23308662
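The rename hunks above swap one LFS pointer for another; a pointer file is just a few `key value` lines (`version`, `oid`, `size`, with `size` being the byte count of the real parquet file). A small sketch (a hypothetical parser, not taken from git-lfs itself) of reading those fields:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a dict of its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# New pointer contents from the first rename hunk above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:57dd51a62f8a928482d064979415ccc2a96045dfdebb2fea2efd12e38c6934f8
size 317395419
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # 317395419
```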
README.md DELETED
@@ -1,71 +0,0 @@
- ---
- annotations_creators:
- - no-annotation
- language_creators:
- - found
- language:
- - da
- license:
- - mit
- multilinguality:
- - monolingual
- size_categories:
- - 1M<n<10M
- source_datasets:
- - original
- task_categories:
- - text-generation
- task_ids:
- - language-modeling
- pretty_name: Reddit-da
- ---
-
- # Dataset Card for Reddit-da
-
- ## Table of Contents
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
- - [Dataset Summary](#dataset-summary)
- - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
- - [Data Instances](#data-instances)
- - [Data Fields](#data-fields)
- - [Additional Information](#additional-information)
- - [Dataset Curators](#dataset-curators)
- - [Licensing Information](#licensing-information)
- - [Contributions](#contributions)
-
- ## Dataset Description
-
- ### Dataset Summary
-
- This dataset consists of 1,908,887 Danish posts from Reddit. These are from [this Reddit dump](https://files.pushshift.io/reddit/) and have been filtered using [this script](https://github.com/NBAiLab/notram/blob/master/corpus_generation_scripts/lang_detect_reddit.py), which uses FastText to detect the Danish posts.
-
- ### Supported Tasks and Leaderboards
-
- This dataset is suitable for language modelling.
-
- ### Languages
-
- This dataset is in Danish.
-
- ## Dataset Structure
-
- ### Data Instances
-
- Every entry in the dataset contains a short Reddit comment in Danish, along with a unique ID.
-
- ### Data Fields
-
- An entry in the dataset consists of the following fields:
- - `id` (`str`): A unique identifier.
- - `text` (`str`): A short Reddit comment.
-
- ## Additional Information
-
- ### Licensing Information
-
- The dataset is released under the MIT license.
-
- ### Contributions
-
- Thanks to [@saattrupdan](https://github.com/saattrupdan) for adding this dataset to the Hugging Face Hub.
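Per the deleted card, each record carries only an `id` and a `text` field, both strings. A minimal sketch validating that shape, using illustrative made-up values (the helper and sample are hypothetical, not drawn from the actual data):

```python
def validate_record(record: dict) -> bool:
    """Check that a record has exactly the two documented string fields."""
    return (
        set(record) == {"id", "text"}
        and isinstance(record["id"], str)
        and isinstance(record["text"], str)
    )

sample = {"id": "abc123", "text": "Hej, hvordan går det?"}
print(validate_record(sample))  # True
```

The parquet paths above suggest the Hub repo id is `DDSC/reddit-da` (an assumption here), in which case records of this shape would be what a `datasets` load of that repo yields.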