.gitignore CHANGED
@@ -1,9 +1,3 @@
  # Python
  __pycache__/*
  *.pyc
-
- # cSpell
- cspell.json
-
- # tmp files
- tmp.py
CONTRIBUTING.md DELETED
@@ -1,56 +0,0 @@
- ## Working with the dataset locally
-
- A Hugging Face datasets repository is a git repository like any other. You can simply download it like so:
-
- ```bash
- git clone https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2
- cd danish-gigaword-2
- ```
-
- You can then work with the dataset locally like so:
-
- ```py
- from datasets import load_dataset
-
- name = "../." # instead of "danish-foundation-models/danish-gigaword-2"
- dataset = load_dataset("../.", split="train")
- # make transformations here
- ```
-
- > Note: Even when loading the dataset locally, Hugging Face datasets still uses a cache, so after making changes you might need to reset it to see that they have taken effect. You can do this by deleting the cached files, which you can locate using `dataset.cache_files`.
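For illustration, a minimal sketch (an editor's addition, not part of the original file) of how the cache mentioned in the note could be cleared; `dataset.cache_files` and its `"filename"` key are part of the `datasets` API, the rest is an assumed local workflow:

```py
import os

from datasets import load_dataset

dataset = load_dataset("../.", split="train")

# dataset.cache_files is a list of dicts; each entry points at a cached Arrow file
for cache_file in dataset.cache_files:
    os.remove(cache_file["filename"])
```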
-
- ## Installing dependencies
-
- This repo comes with a few dependencies you need to install to make it run. It uses a [makefile](https://opensource.com/article/18/8/what-how-makefile) to run commands and [uv](https://docs.astral.sh/uv/) for package management. Once you have uv installed you can install the dependencies using:
-
- ```bash
- make install
- ```
-
- ## Running dataset tests
-
- This dataset is special in that it comes with a test suite, e.g. testing that the ids are unique and that the format is consistent. You can run the suite using:
-
- ```bash
- make test
- ```
-
- ## Submitting a PR
-
- Creating a PR on Hugging Face is a bit different from creating one on GitHub.
-
- 1) Go to the community tab on Hugging Face, press *new pull request*, and choose *on your machine*. Specify the title of your PR. Then you can simply:
-
- ```bash
- git fetch origin refs/pr/{PR NUMBER}:pr/{PR NUMBER}
- git checkout pr/{PR NUMBER}
- # make your changes here
- # push to hub
- git push origin pr/{PR NUMBER}:refs/pr/{PR NUMBER}
- ```
-
- Before you make the PR, be sure that the tests have been run.
-
- For an example PR, see the following:
-
- - [Restructuring columns in the dataset](https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2/discussions/11)
README.md CHANGED
@@ -99,95 +99,59 @@ task_categories:
  - text-generation
  task_ids:
  - language-modeling
- pretty_name: Danish Dynaword
  language_bcp47:
  - da
  - da-bornholm
  - da-synnejyl
  ---

- <!--
- readme structure is inspired by:
- https://github.com/huggingface/datasets/blob/main/templates/README_guide.md -->
-
- # 🧨 Danish Dynaword
-
- |              |                                                                                                                                                                 |
- | ------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------- |
- | **Language** | dan, dansk, Danish                                                                                                                                              |
- | **License**  | Permissible, see the respective dataset                                                                                                                         |
- | **Models**   | For models trained using this data see [danish-foundation-models](https://huggingface.co/danish-foundation-models)                                             |
- | **Contact**  | If you have questions about this project please create an issue [here](https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2/discussions) |

  ## Table of Contents
- - [🧨 Danish Dynaword](#-danish-dynaword)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Loading the dataset](#loading-the-dataset)
- - [Languages:](#languages)
  - [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
- - [Curation Rationale](#curation-rationale)
- - [Annotations](#annotations)
  - [Source Data](#source-data)
  - [Additional Information](#additional-information)
- - [Contributing to the dataset](#contributing-to-the-dataset)
  - [Citation Information](#citation-information)

  ## Dataset Description

  ### Dataset Summary

- The Danish Dynaword is a continually developed collection of Danish free-form text datasets from various domains. It is intended to be continually updated with new data sources. If you would like to contribute a dataset, see the [contribute section](#contributing-to-the-dataset).
-

  ### Loading the dataset

  ```py
  from datasets import load_dataset

- name = "danish-foundation-models/danish-dynaword"
  ds = load_dataset(name, split = "train")
  sample = ds[1] # see "Data Instances" below
- ```
-
- or load it by streaming the data
- ```py
  ds = load_dataset(name, split = "train", streaming=True)
- dataset_iter = iter(ds)
- sample = next(dataset_iter)
  ```

- You can also load a single subset at a time:
- ```py
- ds = load_dataset(name, "adl", split = "train")
- ```
-
- As Danish Dynaword is continually expanded and curated, you can make sure that you get the same dataset every time by specifying the revision:
- ```py
- ds = load_dataset(name, revision="{desired revision}")
- ```
-
- ### Languages:
- This dataset includes the following languages:
-
- - dan-Latn
- - dan-Latn-bornholm
- - dan-Latn-synnejyl
-
- Language is denoted using [BCP-47](https://en.wikipedia.org/wiki/IETF_language_tag), using the language code ISO 639-3 and the script code ISO 15924. The last element denotes the region variant.
-
  ## Dataset Structure

- The dataset contains text from different sources which are thoroughly defined in [Source Data](#source-data).

  ### Data Instances

@@ -195,14 +159,15 @@ Each entry in the dataset consists of a single text with associated metadata

  ```py
  {
- "text": "SAMLEDE VÆRKER\n\nJEPPE AAKJÆR GYLDENDALSKE BOGHANDEL...",
- "source": "adl",
- "id": "adl_aakjaer06val",
- "added": "2020-09-14",
- "created": "1700-01-01, 2022-01-01",
- "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
- "domain": "Wiki & Books",
- "metadata": {"source-pretty": "Archive for Danish Literature"},
  }
  ```

@@ -212,13 +177,12 @@ An entry in the dataset consists of the following fields:

  - `text`(`str`): The content of the document.
  - `source` (`str`): The source of the document (see [Source Data](#source-data)).
- - `id` (`str`): A unique identifier for each document.
  - `added` (`str`): A date for when the document was added to this collection.
  - `created` (`str`): A date range for when the document was originally created.
- - `license` (`str`): The license of the document. The licenses vary according to the source.
- - `domain` (`str`): The domain of the source
- - `metadata/source-pretty` (`str`): The long-form version of the short-form source name
- - `metadata/*`: Potentially additional metadata

  ### Data Splits

@@ -227,54 +191,128 @@ The entire corpus is provided in the `train` split.

  ## Dataset Creation

- ### Curation Rationale
-
- These datasets were collected and curated with the intention of making large quantities of Danish text data available. While this was collected with the intention of developing language models, it is likely to have multiple other uses such as examining language development and differences across domains.
-
- ### Annotations
-
- This data generally contains no annotation besides the metadata attached to each sample, such as what domain it belongs to.
-
  ### Source Data

  Below follows a brief overview of the sources in the corpus along with their individual license.

- | Source            | License                                                  |
- | ----------------- | -------------------------------------------------------- |
- | adl               | Creative Commons Legal Code 1.0 Universal                |
- | botxt             | Creative Commons Legal Code 1.0 Universal                |
- | dannet            | [dannet license]                                         |
- | depbank           | Attribution-ShareAlike 4.0 International                 |
- | ep                | Creative Commons Legal Code 1.0 Universal                |
- | ft                | Creative Commons Legal Code 1.0 Universal                |
- | gutenberg         | [gutenberg license]                                      |
- | hest              | Creative Commons Legal Code 1.0 Universal                |
- | jvj               | Attribution-ShareAlike 4.0 International                 |
- | naat              | Creative Commons Legal Code 1.0 Universal                |
- | relig             | Creative Commons Legal Code 1.0 Universal                |
- | retsinformationdk | [Other (Danish Law)]                                     |
- | retspraksis       | Creative Commons Legal Code 1.0 Universal                |
- | skat              | Creative Commons Legal Code 1.0 Universal                |
- | spont             | Creative Commons Legal Code 1.0 Universal                |
- | synne             | Creative Commons Legal Code 1.0 Universal                |
- | tv2r              | [Custom, Creative Commons Attribution 4.0 International] |
- | wiki              | Creative Commons Legal Code 1.0 Universal                |
- | wikibooks         | Creative Commons Legal Code 1.0 Universal                |
- | wikisource        | Creative Commons Legal Code 1.0 Universal                |
-
- [Custom, Creative Commons Attribution 4.0 International]: https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2/blob/main/data/tv2r/tv2r.md#license-information
- [gutenberg license]: https://www.gutenberg.org/policy/license.html
- [dannet license]: https://cst.ku.dk/projekter/dannet/license.txt
- [Other (Danish Law)]: https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2/blob/main/data/retsinformationdk/retsinformationdk.md#license-information
-

  ## Additional Information

- ### Contributing to the dataset
-
- We welcome contributions to the dataset, such as new sources, better data filtering, and so on. To get started on contributing please see [the contribution guidelines](CONTRIBUTING.md).

  ### Citation Information

- This version expands upon existing dataset sources such as the [Danish Gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite the source of the dataset when using these datasets.

  - text-generation
  task_ids:
  - language-modeling
+ pretty_name: Danish Gigaword
  language_bcp47:
  - da
  - da-bornholm
  - da-synnejyl
  ---

+ # Danish Gigaword 2
+
+ *Version*: 2.0.0
+
+ *License*: See the respective dataset

  ## Table of Contents
+ - [Danish Gigaword 2](#danish-gigaword-2)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Loading the dataset](#loading-the-dataset)
  - [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
  - [Source Data](#source-data)
  - [Additional Information](#additional-information)
  - [Citation Information](#citation-information)

  ## Dataset Description

+ This is intended as a second version of the Danish Gigaword corpus. It is intended to be continually updated with new data sources and is currently a work in progress.

  ### Dataset Summary

+ The Danish Gigaword Corpus contains text spanning several domains and forms.

  ### Loading the dataset

  ```py
  from datasets import load_dataset

+ name = "danish-foundation-models/danish-gigaword"
  ds = load_dataset(name, split = "train")
  sample = ds[1] # see "Data Instances" below

+ # or load by streaming the data
  ds = load_dataset(name, split = "train", streaming=True)
+ sample = next(iter(ds))
  ```

  ## Dataset Structure

+ The dataset contains text from different sources which are thoroughly defined in [Source Data](#source-data). See the [homepage](https://gigaword.dk) or [paper](https://aclanthology.org/2021.nodalida-main.46.pdf) for more information.

  ### Data Instances

  ```py
  {
+ 'text': 'Vimoutiers er en kommune i departementet Orne i Basse-Normandie regionen i det nordvestlige Frankrig.\nCykelløbet Paris-Camembert slutter i Vimoutiers.\nHistorie.\nDen 14. juni 1944, under invasionen i Normandiet blev Vimoutiers bombarderet af allierede styrker. Landsbyen blev ødelagt og 220 civile dræbt.\nPersonligheder.\nPolitikeren Joseph Laniel (1889-1975) var født i Vomoutiers.',
+ 'source': 'wiki',
+ 'id': 'wiki_366127',
+ 'added': '2021-03-28',
+ 'created': '2019-01-01, 2021-01-01',
+ 'metadata': {
+     'domain': 'Wiki & Books',
+     'license': 'Creative Commons Legal Code\n\nCC0 1.0 Universal',
+     'source-pretty': 'Wikipedia'
+ }
  }
  ```

  - `text`(`str`): The content of the document.
  - `source` (`str`): The source of the document (see [Source Data](#source-data)).
+ - `id` (`str`): A unique identifier for each document.
  - `added` (`str`): A date for when the document was added to this collection.
  - `created` (`str`): A date range for when the document was originally created.
+ - `metadata/license` (`str`): The license of the document. The licenses vary according to the source.
+ - `metadata/domain` (`str`): The domain of the source.
+ - `metadata/source-pretty` (`str`): The long-form version of the short-form source name.

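For illustration, a minimal sketch (an editor's addition, not part of the README being diffed) of how these fields can be inspected on a single streamed sample; the dataset name follows the loading example above, and the printed values should match the structure shown in "Data Instances":

```py
from datasets import load_dataset

name = "danish-foundation-models/danish-gigaword"
ds = load_dataset(name, split="train", streaming=True)
sample = next(iter(ds))

# top-level fields
print(sample["id"], sample["source"], sample["added"], sample["created"])

# nested metadata fields
print(sample["metadata"]["domain"])         # e.g. "Wiki & Books"
print(sample["metadata"]["license"])        # varies by source
print(sample["metadata"]["source-pretty"])  # e.g. "Wikipedia"

print(sample["text"][:200])  # first 200 characters of the document text
```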
  ### Data Splits

  ## Dataset Creation

  ### Source Data

  Below follows a brief overview of the sources in the corpus along with their individual license.

+ | Source            | License                                                                                              |
+ | ----------------- | ---------------------------------------------------------------------------------------------------- |
+ | adl               | Creative Commons Legal Code 1.0 Universal                                                            |
+ | botxt             | Creative Commons Legal Code 1.0 Universal                                                            |
+ | dannet            | [dannet license](https://cst.ku.dk/projekter/dannet/license.txt)                                     |
+ | depbank           | Attribution-ShareAlike 4.0 International                                                             |
+ | ep                | Creative Commons Legal Code 1.0 Universal                                                            |
+ | ft                | Creative Commons Legal Code 1.0 Universal                                                            |
+ | gutenberg         | [gutenberg license](https://www.gutenberg.org/policy/license.html)                                   |
+ | hest              | Creative Commons Legal Code 1.0 Universal                                                            |
+ | jvj               | Attribution-ShareAlike 4.0 International                                                             |
+ | naat              | Creative Commons Legal Code 1.0 Universal                                                            |
+ | relig             | Creative Commons Legal Code 1.0 Universal                                                            |
+ | retsinformationdk | Danish Copyright law at https://www.retsinformation.dk/forms/r0710.aspx?id=164796 states "§ 9. Love, administrative forskrifter, retsafgørelser og lignende offentlige aktstykker er ikke genstand for ophavsret. Stk. 2. Bestemmelsen i stk. 1 gælder ikke for værker, der fremtræder som selvstændige bidrag i de i stk. 1 nævnte aktstykker. Sådanne værker må dog gengives i forbindelse med aktstykket. Retten til videre udnyttelse afhænger af de i øvrigt gældende regler." |
+ | retspraksis       | Creative Commons Legal Code 1.0 Universal                                                            |
+ | skat              | Creative Commons Legal Code 1.0 Universal                                                            |
+ | spont             | Creative Commons Legal Code 1.0 Universal                                                            |
+ | synne             | Creative Commons Legal Code 1.0 Universal                                                            |
+ | tv2r              | The owner of this content is TV2 Regionerne, Denmark. Creative Commons Attribution 4.0 International |
+ | wiki              | Creative Commons Legal Code 1.0 Universal                                                            |
+ | wikibooks         | Creative Commons Legal Code 1.0 Universal                                                            |
+ | wikisource        | Creative Commons Legal Code 1.0 Universal                                                            |
+
+ These sources correspond to the following top-level domains in the dataset:
+ ```python
+ # mapping from source to top-level domain
+ domain_mapping_dict = {
+     "retsinformationdk": "Legal",
+     "skat": "Legal",
+     "retspraksis": "Legal",
+     "hest": "Social Media",
+     "cc": "Web",
+     "adl": "Wiki & Books",
+     "botxt": "Other",
+     "danavis": "News",
+     "dannet": "dannet",
+     "depbank": "Other",
+     "ep": "Conversation",
+     "ft": "Conversation",
+     "gutenberg": "Wiki & Books",
+     "jvj": "Wiki & Books",
+     "naat": "Conversation",
+     "opensub": "Conversation",
+     "relig": "Wiki & Books",
+     "spont": "Conversation",
+     "synne": "Other",
+     "tv2r": "News",
+     "wiki": "Wiki & Books",
+     "wikibooks": "Wiki & Books",
+     "wikisource": "Wiki & Books",
+     "twfv19": "Social Media",  # not present in this version of the dataset
+ }
+ ```
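As an illustrative sketch (an editor's addition, not part of the README being diffed), the mapping above could be used to attach the top-level domain to each document; the column name `top_level_domain` is an assumption:

```python
from datasets import load_dataset

ds = load_dataset("danish-foundation-models/danish-gigaword", split="train")

# add a top-level domain column derived from the `source` field,
# using domain_mapping_dict as defined in the block above
ds = ds.map(lambda example: {"top_level_domain": domain_mapping_dict.get(example["source"], "Other")})
```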
+ And the following mapping translates between the short form and the long form of the source name:
+ ```python
+ # mapping from source to its long name format
+ longname_mapping_dict = {
+     "retsinformationdk": "retsinformation.dk (Danish legal information)",
+     "skat": "Skat (Danish tax authority)",
+     "retspraksis": "retspraksis (Danish legal information)",
+     "hest": "Hestenettet (Danish debate forum)",
+     "cc": "Common Crawl",
+     "adl": "Archive for Danish Literature",
+     "botxt": "Bornholmsk (Danish dialect)",
+     "danavis": "Danish daily newspapers",
+     "dannet": "DanNet (Danish WordNet)",
+     "depbank": "Danish Dependency Treebank",
+     "ep": "European Parliament",
+     "ft": "Folketinget (Danish Parliament)",
+     "gutenberg": "Gutenberg",
+     "jvj": "Johannes V. Jensen (Danish author/poet)",
+     "naat": "NAAT",
+     "opensub": "Open Subtitles",
+     "relig": "Religious texts",
+     "spont": "Spontaneous speech",
+     "synne": "Synderjysk (Danish dialect)",
+     "tv2r": "TV 2 Radio (Danish news)",
+     "wiki": "Wikipedia",
+     "wikibooks": "Wikibooks",
+     "wikisource": "Wikisource",
+     "twfv19": "Twitter Folketingsvalget 2019 (Danish election tweets)",  # not present in this version of the dataset
+ }
+ ```

  ## Additional Information

  ### Citation Information

+ The original version of Danish Gigaword was created as part of the following publication.
+
+ > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
+
+ ```
+ @inproceedings{dagw,
+   title = {{The Danish Gigaword Corpus}},
+   author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
+   year = 2021,
+   booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
+   publisher = {NEALT}
+ }
+ ```
+
+ <!--
+ Todo:
+
+ add tests:
+ - unique ids
+ - valid metadata
+
+ add ci:
+ - summary statistics
+ - tables
+
+ prettify:
+ - license as independent column
+ - ensure pretty_name is standard
+ - potentially remove some columns
+ -->
data/adl/adl.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5af9444529d92c37f35161829c652f8b928f9f1dfb5836065f320d1e1d698818
- size 106401744
+ oid sha256:d51c291d1cf6461a1e59dd45dfd63ee39a5c62cd3c2fd05877489d50aaa5115e
+ size 106409966
data/botxt/botxt.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ec89c1dd57f1987dc6fe059a33a1d16b41b8c87439673a381f9671497f65b017
- size 1344033
+ oid sha256:b42642896dfda21b23bb8e8ef5ba65f878ebfa5fec2f6d57aec1e06778c75bbf
+ size 1353171
data/dannet/dannet.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b9006617e35f568e7b7e4dacc87c4a490cf0a9170bd4e91488de77e00d3fb38c
- size 4487008
+ oid sha256:905c2441a4c242e24d370775e9e035df3c67a7a1d797a615297cb6a1bbf51a96
+ size 4743422
data/depbank/depbank.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3d4172e2ab4d7256ca5b76ad45b4d7326616e6679642056fdef20c5e3a8b1c62
- size 392216
+ oid sha256:863aac5735bee6995b665864ea355b488e35bb2cca696ea340d8febc653b8886
+ size 394917
data/ep/ep.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f76e86335bd765b3ff3cf5ccdfe8f220e39349a0344fdf2b9918adbdd96aedeb
- size 170796385
+ oid sha256:85c8eb6954522c757ee3e410f7f277a74ecedd8e7507ef00a698a654dc8bea20
+ size 171150568
data/ft/ft.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e46276c575c7d9ddc30f44111206d250cb02473c992d0087bf0a9a5f4266da18
- size 181926375
+ oid sha256:31775c6e84a1542897641712e39d4c6cde2aa69673d7875c6a39f3148c08e0fb
+ size 182049520
data/gutenberg/gutenberg.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1e8364195e60b64e285d0c1b8c4b6ae0da7a1b6165de77bb4fc4049c317b445c
- size 12342492
+ oid sha256:973df5121d3da73a5915f6dd1da0290ffbaece92b2c7c4dec562155974c0076f
+ size 12361984
data/hest/hest.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:258c9263b68b8d8573eab1eaa8221c557e9259aa1a222911fdff41f5cbbda66b
- size 747678214
+ oid sha256:9b85d658074ebec3eb95da8f8e522d83707b646b5f3b8b706279496eec3b31c3
+ size 748670544
data/jvj/jvj.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5706ac4ddb20ce41ac198d3a603c80a7ab76e8a84d028bf145934a704401e17d
- size 6824089
+ oid sha256:7a524aafe8fe1ba86bc09c091b10aacf55e558124fef59e68f60bed03816636a
+ size 6829395
data/naat/naat.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:fc7c4b8640c72a20abba667d9630fe8d234266a7d42f50a9a20be28b1e0ecff6
- size 544392
+ oid sha256:6958784a0c4039e9357dee0dedc6bd010e7dd3573d2d9a4db45ce5e4a6608feb
+ size 545253
data/relig/relig.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d7c636a36f5eb1376ffebf2f1f83b82ed3d3860ef1f87b55c7f8ccf894fbc844
- size 2001056
+ oid sha256:ba59db9efa6756fd6306380c39e9f25b50c99ddb6b7c0c2391e417d95d0af6da
+ size 2003050
data/retsinformationdk/retsinformationdk.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9b2dfb08646a54df30fb1e2be1bbcd50a30ba02378ef35014345ae25959f2241
- size 648816450
+ oid sha256:69df3e71d482c746854535710ffb57c9ba3c9ac633931222e8be84d0e67cc22c
+ size 651256719
data/retspraksis/retspraksis.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1f37fcd08d69abbf4329033d15b57efd3ce83dd9c6d55a339529888014fae827
- size 87201467
+ oid sha256:28f86c894204d6c1348a5fdfae7b69d1d355ba311e42d70fd669d52138b95d3a
+ size 87674092
data/skat/skat.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e4ced905b6b9629b2c7c62d94eda1413de90603f7f60c762837aee5fd182896e
- size 164723225
+ oid sha256:5f87f38f90553725c889080b3def8e24dadd3b2eaee28b43bae2a19493cf2143
+ size 165069920
data/spont/spont.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:149bb5b9f18a2883995bfd3584cee88079a8ececb4c5b6c51f778aa34092bcf6
- size 1805872
+ oid sha256:0ac515b1dedc78fb9123bffbab2cf3c0fe1e126a070ad342d7d0c707096e838b
+ size 1814921
data/synne/synne.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:96a43b9bca159540a0b27c9a5488192cfb29c02e6e147f82e00a8a3204e9b9ce
- size 74311
+ oid sha256:701bf010bca88dd4722ffa72404b91e24703bd9552003371771bf1823dc58138
+ size 77042
data/tv2r/tv2r.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ad0c74fae026560fec279e0a6fd7821bd1af18c864f53925cbe9fefd254f64d0
- size 40341900
+ oid sha256:e5cc87b9de1c11ef580d939d1d877b5553d3c75aa33d0f2e280986b8787900a5
+ size 40686259
data/wiki/wiki.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:995d81c1316c5b2b8852c9c3a9404e0bd2de00ee0e79b86197b1fe20b6999469
- size 241828019
+ oid sha256:41bb02c5b10290746b00750db69c565bfe25fda2529efcc603f108d820dc6c13
+ size 242917206
data/wikibooks/wikibooks.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:478e98d6450700d5248be68bb9cc8739ec4395071a43ce71f5f69bdc3e15cac0
- size 11262962
+ oid sha256:5984554e9c048e06cd156903a345178dc18a95572e3b12fb4c6e6266bcc87fa5
+ size 11282733
data/wikisource/wikisource.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c9c42ebd7754dd6cdda5259b8d927b581b463f998d3fb13aeeb755015b870cc4
- size 9480324
+ oid sha256:a4ee7ec0bb3147f06617c94a8951055a5a806c7917de229d6b2ec2df9c4c0b73
+ size 9488335
paper/paper.md DELETED
@@ -1,136 +0,0 @@
- # Danish DynaWord: Moving from one-shot datasets to continuously developed datasets
-
- Authors:
-
- This is the list of authors to be invited for co-authorship
-
- CHC
- - Kenneth Enevoldsen
- - Jan Kostkan
- - Per
- - Kristoffer Nielbo
- - Marton
- - Martin (good CI ideas)
-
- Alexandra:
- - Dan Nielsen
- - Rasmus
- - Peter
- - Kristian
- - Torben
-
- DFM
- - Bolette Pedersen (or someone from her group)
- - Desmond
- - Peter
-
- Danish Royal Library? Other organizations that are important to include?
-
- # Abstract
-
- In this work we introduce Dynaword, an argument for moving toward continuously developed datasets as opposed to the current release-and-forget datasets.
- As an example we release Danish DynaWord.
-
- The dataset is available at: LINK
-
- # Introduction
-
- Current datasets
- While creating a current
-
- Current methods for dataset creation tackle only a small [@joshiStateFateLinguistic2020]
- In this project we specifically choose to focus on the low- to mid-resource language Danish (dan). We see two reasons for doing this:
-
- - The dynaword approach is most likely to be beneficial for low- to mid-resourced languages (class 2-4; @joshiStateFateLinguistic2020) which have contributors able and willing to contribute, whereas high-resource languages (class 5; @joshiStateFateLinguistic2020) could likely sustain multiple dynaword projects targeting specific domains.
- - not only for Danish b
-
- While it is in theory possible to open a PR on an existing dataset, this practice is rare; instead we often see improvements on an existing dataset published separately (see e.g. [@pascal_alie_kenneth_et_paper], [@that_guy_that_added_langauge_tag_to_a_dataset]). These derivative works rarely get as many downloads as the original.
-
- Contrasting this approach to code development - where it is common practice to create PRs to continually improve the codebase - makes this dataset development landscape seem immature and inefficient.
-
- ## Related work
-
- ### Existing approaches in dataset development
-
- Large projects like OSCAR [@OSCAR], HPLT [@hplt], and FineWeb [@fineweb] release iterative versions of datasets derived from Common Crawl [@commoncrawl].
- These approaches make it hard for contributors to join and contribute, and they silo dataset development in a few institutions. Furthermore, the focus on
- Common Crawl ignores other valuable resources such as public APIs, and comes with a slew of ethical and legal concerns [@missing] which affect not only the usefulness of the datasets but also the models derived from them.
- While such resources, e.g. individual datasets derived from APIs, would be expensive for individual groups to collect - as they rarely offer enough data to be worth the time - opening this approach up to a community makes them more viable.
-
- Opening up the development pipeline also increases openness around the dataset collection. ADD SOMETHING on inclusion here.
-
- Read up on fineweb!!! (I assume they do some CI)
-
- Other successful open-source projects: dependency treebank project [@dep_treebank], ...
-
- Existing projects on open-licensed data [@elutherAI]
-
- We note that our approach is complementary to existing projects such as FineWeb.
-
- ### Continuous Integration
-
- Do we need a section on this?
-
- ### Danish and Scandinavian Datasets
-
- Lacunae of Danish [@cite]
- Danish Gigaword [@dagw]
- Swedish gigaword? [@swedish]
- NCC [@ncc_kummervold]
-
- Existing benchmarks covering Scandinavian languages such as ScandEval [@scandeval; @scandeval2] and SEB [@seb] argue that it is reasonable to evaluate on the
-
- # Methods
-
- ## Continuous Integration
-
- Our approach for continuous integration, how to submit, what we test for.
-
- # Results
-
- ## Dataset collection
-
- Current collection.
-
- | Source          | Date       | Domain         | License | Size |
- | --------------- | ---------- | -------------- | ------- | ---- |
- | **Legal**       |            |                |         |      |
- | Retsinformation | date range | Legal, Written |         | 188M |
- | ...             |            |                |         |      |
- | **Total**       |            |                |         |      |
-
- For a description of each dataset we refer to the public repository.
- <!-- we could also include -->
-
- # Conclusion
-
- ## Dataset delivery
-
- # Limitation
-
- - Is Danish too limited: Should we consider multilingual sources: Scandinavian, Germanic, English?
-
- - Size:
-   - The size is currently limited; if the size grows too large, development becomes problematic
-   - This is still way smaller than what could be extracted from CC
-
- - Only Danish: While developing CI for datasets is by no means new [@missing], doing so for open pre-training datasets in an open, collaborative fashion has
- not been tested on a larger scale. Once the approach has been validated we plan to host a collaboration along with Hugging Face to develop these dataset sources.
-
- - Huggingface datasets as a development platform for datasets: Throughout this work it was clear to many of the developers that minor changes (e.g. filtering out a few bad examples) were both hard to create PRs for and hard to review, often requiring the reviewer to simply trust that the user did what was stated in the commit message. While previous projects have tackled this issue using human-readable formats [@dep_treebank], due to the scope of this dataset that would quickly become inefficient.
- This lack of clarity increases the likelihood of dataset attacks such as dataset poisoning [@missing]. We expect to see both interface development and software development to detect and prevent such attacks.
-
- - Machine generated content within training data: Not
-
- Ethical and Environmental considerations
-
- Environmental:
- - a common codebase leads to less duplication of datasets and reduces the storage required
- - continual CI running on large datasets could be a concern. Currently our tests use a total of XXX CO2-eq (estimated using codecarbon). However, we have already seen people training [@fineweb] and evaluating LLMs to approximate dataset quality, and such workflows could quickly increase the CO2 consumption.
paper/references.bib DELETED
@@ -1,25 +0,0 @@
-
- @article{joshiStateFateLinguistic2021,
-   title = {The {State} and {Fate} of {Linguistic} {Diversity} and {Inclusion} in the {NLP} {World}},
-   url = {http://arxiv.org/abs/2004.09095},
-   abstract = {Language technologies contribute to promoting multilingualism and linguistic diversity around the world. However, only a very small number of the over 7000 languages of the world are represented in the rapidly evolving language technologies and applications. In this paper we look at the relation between the types of languages, resources, and their representation in NLP conferences to understand the trajectory that different languages have followed over time. Our quantitative investigation underlines the disparity between languages, especially in terms of their resources, and calls into question the "language agnostic" status of current models and systems. Through this paper, we attempt to convince the ACL community to prioritise the resolution of the predicaments highlighted here, so that no language is left behind.},
-   urldate = {2021-03-20},
-   journal = {arXiv:2004.09095 [cs]},
-   author = {Joshi, Pratik and Santy, Sebastin and Budhiraja, Amar and Bali, Kalika and Choudhury, Monojit},
-   month = jan,
-   year = {2021},
-   note = {arXiv: 2004.09095},
-   keywords = {Computer Science - Computation and Language},
- }
-
- @inproceedings{dagw,
-   title = {The {{Danish Gigaword}} Corpus},
-   booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics ({{NoDaLiDa}})},
-   author = {{Str{\o}mberg-Derczynski}, Leon and Ciosici, Manuel and Baglini, Rebekah and Christiansen, Morten H. and Dalsgaard, Jacob Aarup and Fusaroli, Riccardo and Henrichsen, Peter Juel and Hvingelby, Rasmus and Kirkedal, Andreas and Kjeldsen, Alex Speed and Ladefoged, Claus and Nielsen, Finn Aarup and Madsen, Jens and Petersen, Malte Lau and Rystr{\o}m, Jonathan Hvithamar and Varab, Daniel},
-   year = {05 31--2 06 2021},
-   pages = {413--421},
-   publisher = {Link{\"o}ping University Electronic Press, Sweden},
-   address = {Reykjavik, Iceland (Online)},
-   abstract = {Danish language technology has been hindered by a lack of broad-coverage corpora at the scale modern NLP prefers. This paper describes the Danish Gigaword Corpus, the result of a focused effort to provide a diverse and freely-available one billion word corpus of Danish text. The Danish Gigaword corpus covers a wide array of time periods, domains, speakers' socio-economic status, and Danish dialects.},
-   file = {/Users/au561649/Zotero/storage/9B3GVP6D/Derczynski et al. - The Danish Gigaword Corpus.pdf}
- }
pyproject.toml CHANGED
@@ -1,7 +1,7 @@
  [project]
- name = "danish-dynaword"
- version = "1.0.3"
- description = "project code for the danish dynaword project"
+ name = "danish-gigaword-2"
+ version = "1.0.1"
+ description = "project code for the danish gigaword 2 project"
  readme = "README.md"
  requires-python = ">=3.13"
  dependencies = [
@@ -12,5 +12,5 @@ dependencies = [
      "plotnine>=0.14.3",
      "pytest>=8.3.4",
      "seaborn>=0.13.2",
-     "tomlkit>=0.13.2",
+     "toml>=0.10.2",
  ]
scripts/bump_version.py CHANGED
@@ -1,15 +1,16 @@
  from packaging.version import Version
  from pathlib import Path

- import tomlkit
+ import toml
+
  c_file = Path(__file__)
  pyproject = c_file.parent.parent / "pyproject.toml"


  with pyproject.open("r") as f:
-     data = tomlkit.load(f)
+     data = toml.load(f)
  version = Version(data["project"]["version"])
  data["project"]["version"] = str(Version(f"{version.major}.{version.minor}.{version.micro + 1}"))

  with pyproject.open("w") as f:
-     tomlkit.dump(data, f)
+     toml.dump(data, f)
scripts/load_dataset.py ADDED
@@ -0,0 +1,6 @@
+ from datasets import load_dataset
+
+ name = "../." # "danish-foundation-models/danish-gigaword"
+ ds = load_dataset("../.", split = "train")
+
+ ds
uv.lock CHANGED
@@ -198,8 +198,8 @@ wheels = [
  ]

  [[package]]
- name = "danish-dynaword"
- version = "1.0.2"
+ name = "danish-gigaword-2"
+ version = "1.0.1"
  source = { virtual = "." }
  dependencies = [
      { name = "datasets" },
@@ -209,7 +209,7 @@ dependencies = [
      { name = "plotnine" },
      { name = "pytest" },
      { name = "seaborn" },
-     { name = "tomlkit" },
+     { name = "toml" },
  ]

  [package.metadata]
@@ -221,7 +221,7 @@ requires-dist = [
      { name = "plotnine", specifier = ">=0.14.3" },
      { name = "pytest", specifier = ">=8.3.4" },
      { name = "seaborn", specifier = ">=0.13.2" },
-     { name = "tomlkit", specifier = ">=0.13.2" },
+     { name = "toml", specifier = ">=0.10.2" },
  ]

@@ -1071,12 +1071,12 @@ wheels = [
  ]

  [[package]]
- name = "tomlkit"
- version = "0.13.2"
+ name = "toml"
+ version = "0.10.2"
  source = { registry = "https://pypi.org/simple" }
- sdist = { url = "https://files.pythonhosted.org/packages/b1/09/a439bec5888f00a54b8b9f05fa94d7f901d6735ef4e55dcec9bc37b5d8fa/tomlkit-0.13.2.tar.gz", hash = "sha256:fff5fe59a87295b278abd31bec92c15d9bc4a06885ab12bcea52c71119392e79", size = 192885 }
+ sdist = { url = "https://files.pythonhosted.org/packages/be/ba/1f744cdc819428fc6b5084ec34d9b30660f6f9daaf70eead706e3203ec3c/toml-0.10.2.tar.gz", hash = "sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f", size = 22253 }
  wheels = [
-     { url = "https://files.pythonhosted.org/packages/f9/b6/a447b5e4ec71e13871be01ba81f5dfc9d0af7e473da256ff46bc0e24026f/tomlkit-0.13.2-py3-none-any.whl", hash = "sha256:7a974427f6e119197f670fbbbeae7bef749a6c14e793db934baefc1b5f03efde", size = 37955 },
+     { url = "https://files.pythonhosted.org/packages/44/6f/7120676b6d73228c96e17f1f794d8ab046fc910d781c8d151120c3f1569e/toml-0.10.2-py2.py3-none-any.whl", hash = "sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b", size = 16588 },
  ]