Files changed (50)
  1. .gitignore +1 -5
  2. .vscode/settings.json +1 -1
  3. CONTRIBUTING.md +5 -27
  4. README.md +44 -142
  5. data/adl/adl.md +35 -77
  6. data/adl/descriptive_stats.json +0 -1
  7. data/adl/images/dist_document_length.png +0 -3
  8. data/botxt/botxt.md +36 -77
  9. data/botxt/descriptive_stats.json +0 -1
  10. data/botxt/images/dist_document_length.png +0 -3
  11. data/dannet/dannet.md +57 -79
  12. data/dannet/descriptive_stats.json +0 -1
  13. data/dannet/images/dist_document_length.png +0 -3
  14. data/depbank/depbank.md +31 -83
  15. data/depbank/descriptive_stats.json +0 -1
  16. data/depbank/images/dist_document_length.png +0 -3
  17. data/ep/descriptive_stats.json +0 -1
  18. data/ep/ep.md +32 -78
  19. data/ep/images/dist_document_length.png +0 -3
  20. data/ft/descriptive_stats.json +0 -1
  21. data/ft/ft.md +34 -81
  22. data/ft/images/dist_document_length.png +0 -3
  23. data/gutenberg/descriptive_stats.json +0 -1
  24. data/gutenberg/gutenberg.md +337 -97
  25. data/gutenberg/images/dist_document_length.png +0 -3
  26. data/hest/descriptive_stats.json +0 -1
  27. data/hest/hest.md +34 -80
  28. data/hest/images/dist_document_length.png +0 -3
  29. data/jvj/descriptive_stats.json +0 -1
  30. data/jvj/images/dist_document_length.png +0 -3
  31. data/jvj/jvj.md +33 -84
  32. data/lexdk/create.py +0 -78
  33. data/lexdk/descriptive_stats.json +0 -1
  34. data/lexdk/images/dist_document_length.png +0 -3
  35. data/lexdk/lexdk.md +0 -85
  36. data/lexdk/lexdk.parquet +0 -3
  37. data/naat/descriptive_stats.json +0 -1
  38. data/naat/images/dist_document_length.png +0 -3
  39. data/naat/naat.md +32 -74
  40. data/nordjyllandnews/descriptive_stats.json +0 -1
  41. data/nordjyllandnews/images/dist_document_length.png +0 -3
  42. data/nordjyllandnews/nordjyllandnews.md +4 -70
  43. data/opensubtitles/create.py +0 -123
  44. data/opensubtitles/descriptive_stats.json +0 -1
  45. data/opensubtitles/images/dist_document_length.png +0 -3
  46. data/opensubtitles/opensubtitles.md +0 -159
  47. data/opensubtitles/opensubtitles.parquet +0 -3
  48. data/relig/descriptive_stats.json +0 -1
  49. data/relig/images/dist_document_length.png +0 -3
  50. data/relig/relig.md +33 -74
.gitignore CHANGED
@@ -5,9 +5,5 @@ __pycache__/*
 # cSpell
 cspell.json
 
-# debugfile
-.vscode/launch.json
-
 # tmp files
-tmp.py
-tmp.png
+tmp.py
.vscode/settings.json CHANGED
@@ -1,6 +1,6 @@
 {
     "python.testing.pytestArgs": [
-        "src/tests"
+        "."
     ],
     "python.testing.unittestEnabled": false,
     "python.testing.pytestEnabled": true
CONTRIBUTING.md CHANGED
@@ -3,8 +3,8 @@
 A huggingface datasets repository is a GitHub repository like any other. You can simply download it like so:
 
 ```bash
-git clone https://huggingface.co/datasets/danish-foundation-models/danish-dynaword
-cd danish-dynaword
+git clone https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2
+cd danish-gigaword-2
 ```
 
@@ -12,7 +12,7 @@ You can the work with the dataset locally like so:
 ```py
 from datasets import load_dataset
 
-name = "../." # instead of "danish-foundation-models/danish-dynaword"
+name = "../." # instead of "danish-foundation-models/danish-gigaword-2"
 dataset = load_dataset("../.", split="train")
 # make transformations here
 ```
@@ -49,30 +49,8 @@ git checkout pr/{PR NUMBER}
 git push origin pr/{PR NUMBER}:refs/pr/{PR NUMBER}
 ```
 
-Before you make the PR do be sure to make sure that you have completed the following checklist.
+Before you make the PR do be sure to make sure that the tests have been run.
 
-### Checklist
-
-- [ ] I have run the test suite using `make test` and all tests pass
-- [ ] I have added/changed a dataset and have
-- [ ] I have updated descriptive statistics using `make update-descriptive-statistics`
-- [ ] I have bumped the version use `make bump-version`
-
-### Examples of Previous PRs
 To see example PR you can see the following:
 
-- [Restructuring columns in the dataset](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/11)
-- [Adding a new dataset](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/15)
-- Updated [dataset description and metadata](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/20)
-
-## Frequently asked questions
-
-### Do you accept synthetic dataets
-
-Yes we do generally accept synthetic datasets since it will likely be a promising research direction for low- to mid-resource languages.
-However, you should be aware that synthetic dataset will probably require a more detailed examination and description.
-We will for instance examine the quality of the synthetic subset and whether the model used for the creation permits resharing of the synthetic data under permissible licenses.
-
-### Do you accept non-Danish data
-
-Generally this repository is intended for Danish text, however quite broadly defined. For instance, we do accept data containing [code-switching](https://www.google.com/search?client=safari&rls=en&q=code+switching&ie=UTF-8&oe=UTF-8) and historical Danish text.
+- [Restructuring columns in the dataset](https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2/discussions/11)
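The `load_dataset("../.", split="train")` pattern in the CONTRIBUTING.md diff yields rows with the schema documented in the README. A minimal sketch of the "make transformations here" step, using a small in-memory stand-in for the real corpus (the records and the filtering rule are illustrative only, not the project's actual pipeline):

```python
# Stand-in records mimicking corpus rows; in practice these come from
# datasets.load_dataset("../.", split="train") inside a local clone.
docs = [
    {"id": "adl_aakjaer06val", "source": "adl", "text": "SAMLEDE VÆRKER ..."},
    {"id": "botxt_0000040", "source": "botxt", "text": "Ræua-Lârs ..."},
]

# A typical transformation: keep only documents from a single source.
adl_only = [doc for doc in docs if doc["source"] == "adl"]
print([doc["id"] for doc in adl_only])  # ['adl_aakjaer06val']
```

With the real dataset object, the same filter would be expressed as `dataset.filter(lambda ex: ex["source"] == "adl")`.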
README.md CHANGED
@@ -5,14 +5,6 @@ configs:
     data_files:
     - split: train
       path: 'data/*/*.parquet'
-  - config_name: lexdk
-    data_files:
-    - split: train
-      path: data/lexdk/*.parquet
-  - config_name: opensubtitles
-    data_files:
-    - split: train
-      path: data/opensubtitles/*.parquet
   - config_name: retsinformationdk
     data_files:
     - split: train
@@ -120,18 +112,16 @@ language_bcp47:
 
 <!--
 readme structure is inspired by:
-https://github.com/huggingface/datasets/blob/main/templates/README_guide.md
--->
-
+https://github.com/huggingface/datasets/blob/main/templates/README_guide.md -->
 
 # 🧨 Danish Dynaword
 
-| | |
-| ------------ | ------------------ |
-| **Language** | dan, dansk, Danish |
-| **License** | Permissible, See the respective dataset |
-| **Models** | For model trained used this data see [danish-foundation-models](https://huggingface.co/danish-foundation-models) |
-| **Contact** | If you have question about this project please create an issue [here](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions) |
+| | |
+| ------------ | ------------------ |
+| **Language** | dan, dansk, Danish |
+| **License** | Permissible, See the respective dataset |
+| **Models** | For model trained used this data see [danish-foundation-models](https://huggingface.co/danish-foundation-models) |
+| **Contact** | If you have question about this project please create an issue [here](https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2/discussions) |
 
 
 ## Table of Contents
@@ -149,22 +139,12 @@ https://github.com/huggingface/datasets/blob/main/templates/README_guide.md
 - [Curation Rationale](#curation-rationale)
 - [Annotations](#annotations)
 - [Source Data](#source-data)
-- [Dataset Statistics](#dataset-statistics)
 - [Additional Information](#additional-information)
 - [Contributing to the dataset](#contributing-to-the-dataset)
 - [Citation Information](#citation-information)
-- [Disclaimer](#disclaimer)
-- [Notice and take down policy](#notice-and-take-down-policy)
 
 ## Dataset Description
 
-<!-- START-DESC-STATS -->
-- **Language**: dan, dansk, Danish
-- **Number of samples**: 588.48K
-- **Number of tokens (Llama 3)**: 1.84B
-- **Average document length (characters)**: 9222.58
-<!-- END-DESC-STATS -->
-
 
 ### Dataset Summary
@@ -217,19 +197,16 @@ The dataset contains text from different sources which are thoroughly defined in
 
 Each entry in the dataset consists of a single text with associated metadata
 
-<!-- START-SAMPLE -->
 ```py
 {
-    "text": "SAMLEDE VÆRKER\n\nJEPPE AAKJÆR GYLDENDALSKE BOGHANDEL - NORDISK FORLAG KJØBENHAVN OG\nKRISTIANIA 1919 0[...]",
-    "source": "adl",
-    "id": "adl_aakjaer06val",
-    "added": "2020-09-14",
-    "created": "1700-01-01, 2022-01-01",
-    "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
-    "domain": "Wiki & Books",
-    "metadata": {
-        "source-pretty": "Archive for Danish Literature"
-    }
+    "text": "SAMLEDE VÆRKER\n\nJEPPE AAKJÆR GYLDENDALSKE BOGHANDEL...",
+    "source": "adl",
+    "id": "adl_aakjaer06val",
+    "added": "2020-09-14",
+    "created": "1700-01-01, 2022-01-01",
+    "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
+    "domain": "Wiki & Books",
+    "metadata": {"source-pretty": "Archive for Danish Literature"},
 }
 ```
@@ -246,7 +223,7 @@ An entry in the dataset consists of the following fields:
 - `domain` (`str`): The domain of the source
 - `metadata/source-pretty` (`str`): The long form version of the short-form source name
 - `metadata/*`: Potentially additional metadata
-<!-- END-SAMPLE -->
+
 
 ### Data Splits
@@ -266,85 +243,34 @@ This data generally contains no annotation besides the metadata attached to each
 
 Below follows a brief overview of the sources in the corpus along with their individual license.
 
-<!-- START-MAIN TABLE -->
-| Source | Description | N. Tokens | License |
-|:--------------------|:-----------------------------------------------------------------------------------------------------------------------------|:------------|:-----------------------|
-| [lexdk] | Permissible use articles from [lex.dk](https://lex.dk) | 5.69M | [CC-BY-SA 4.0] |
-| [opensubtitles] | Danish subsection of [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles) | 271.60M | [CC-0] |
-| [retsinformationdk] | [retsinformation.dk](https://www.retsinformation.dk) (legal-information.dk) the official legal information system of Denmark | 516.54M | [Danish Copyright Law] |
-| [ep] | The Danish subsection of [Europarl](https://aclanthology.org/2005.mtsummit-papers.11/) | 100.89M | [CC-0] |
-| [ft] | Records from all meetings of The Danish parliament (Folketinget) in the parliament hall | 114.09M | [CC-0] |
-| [wikisource] | The Danish subsection of [Wikisource](https://en.wikisource.org/wiki/Main_Page) | 5.34M | [CC-0] |
-| [spont] | Conversational samples collected as a part of research projects at Aarhus University | 1.56M | [CC-0] |
-| [tv2r] | Contemporary Danish newswire articles published between 2010 and 2019 | 21.67M | [CC-BY-SA 4.0] |
-| [adl] | Danish literature from 1700-2023 from the Archive for Danish Literature (ADL) | 58.49M | [CC-0] |
-| [hest] | Samples from the Danish debate forum www.heste-nettet.dk | 389.33M | [CC-0] |
-| [skat] | Skat is the Danish tax authority. This dataset contains content from its website skat.dk | 122.12M | [CC-0] |
-| [dannet] | [DanNet](https://cst.ku.dk/projekter/dannet) is a Danish WordNet | 1.52M | [DanNet 1.0 License] |
-| [retspraksis] | Case law or judical practice in Denmark derived from [Retspraksis](https://da.wikipedia.org/wiki/Retspraksis) | 57.08M | [CC-0] |
-| [wikibooks] | The Danish Subsection of [Wikibooks](https://www.wikibooks.org) | 6.24M | [CC-0] |
-| [jvj] | The works of the Danish author and poet, [Johannes V. Jensen](https://da.wikipedia.org/wiki/Johannes_V._Jensen) | 3.55M | [CC-BY-SA 4.0] |
-| [gutenberg] | The Danish subsection from Project [Gutenberg](https://www.gutenberg.org) | 6.76M | [Gutenberg License] |
-| [botxt] | The Bornholmsk Ordbog Dictionary Projec | 847.97K | [CC-0] |
-| [depbank] | The Danish subsection of the [Universal Dependencies Treebank](https://github.com/UniversalDependencies/UD_Danish-DDT) | 185.45K | [CC-BY-SA 4.0] |
-| [naat] | Danish speeches from 1930-2022 | 286.68K | [CC-0] |
-| [synne] | Dataset collected from [synnejysk forening's website](https://www.synnejysk.dk), covering the Danish dialect sønderjysk | 52.51K | [CC-0] |
-| [wiki] | The Danish subsection of [wikipedia](https://en.wikipedia.org/wiki/Main_Page) | 122.00M | [CC-0] |
-| [nordjyllandnews] | Articles from the Danish Newspaper [TV2 Nord](https://www.tv2nord.dk) | 37.91M | [CC-0] |
-| [relig] | Danish religious text from the 1700-2022 | 1.24M | [CC-0] |
-| **Total** | | 1.84B | |
-
-[lexdk]: data/lexdk/lexdk.md
-[opensubtitles]: data/opensubtitles/opensubtitles.md
-[retsinformationdk]: data/retsinformationdk/retsinformationdk.md
-[ep]: data/ep/ep.md
-[ft]: data/ft/ft.md
-[wikisource]: data/wikisource/wikisource.md
-[spont]: data/spont/spont.md
-[tv2r]: data/tv2r/tv2r.md
-[adl]: data/adl/adl.md
-[hest]: data/hest/hest.md
-[skat]: data/skat/skat.md
-[dannet]: data/dannet/dannet.md
-[retspraksis]: data/retspraksis/retspraksis.md
-[wikibooks]: data/wikibooks/wikibooks.md
-[jvj]: data/jvj/jvj.md
-[gutenberg]: data/gutenberg/gutenberg.md
-[botxt]: data/botxt/botxt.md
-[depbank]: data/depbank/depbank.md
-[naat]: data/naat/naat.md
-[synne]: data/synne/synne.md
-[wiki]: data/wiki/wiki.md
-[nordjyllandnews]: data/nordjyllandnews/nordjyllandnews.md
-[relig]: data/relig/relig.md
-
-
-[CC-0]: https://creativecommons.org/publicdomain/zero/1.0/legalcode.en
-[CC-BY-SA 4.0]: https://creativecommons.org/licenses/by-sa/4.0/deed.en
-[Danish Copyright Law]: ./data/retsinformationdk/retsinformationdk.md#license-information
-[DanNet 1.0 License]: ./data/dannet/dannet.md#license-information
-[Gutenberg License]: ./data/gutenberg/gutenberg.md#license-information
-<!-- END-MAIN TABLE -->
-
-
-You can learn more about each dataset by pressing
-
-<!-- ### Quality Control
-
-Dynaword performs quality checks along with each PR. These quality checks includes:
-- ensuring unique ids
-TODO:
-- checking for duplicates
--->
-
-
-
-### Dataset Statistics
-
-<!-- START-DATASET PLOTS -->
-<img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
-<img>
-<!-- END-DATASET PLOTS -->
+| Source | License |
+| ----------------- | -------------------------------------------------------- |
+| adl | Creative Commons Legal Code 1.0 Universal |
+| botxt | Creative Commons Legal Code 1.0 Universal |
+| dannet | [dannet license] |
+| depbank | Attribution-ShareAlike 4.0 International |
+| ep | Creative Commons Legal Code 1.0 Universal |
+| ft | Creative Commons Legal Code 1.0 Universal |
+| gutenberg | [gutenberg license] |
+| hest | Creative Commons Legal Code 1.0 Universal |
+| jvj | Attribution-ShareAlike 4.0 International |
+| naat | Creative Commons Legal Code 1.0 Universal |
+| relig | Creative Commons Legal Code 1.0 Universal |
+| retsinformationdk | [Other (Danish Law)] |
+| retspraksis | Creative Commons Legal Code 1.0 Universal |
+| skat | Creative Commons Legal Code 1.0 Universal |
+| spont | Creative Commons Legal Code 1.0 Universal |
+| synne | Creative Commons Legal Code 1.0 Universal |
+| tv2r | [Custom, Creative Commons Attribution 4.0 International] |
+| wiki | Creative Commons Legal Code 1.0 Universal |
+| wikibooks | Creative Commons Legal Code 1.0 Universal |
+| wikisource | Creative Commons Legal Code 1.0 Universal |
+
+[Custom, Creative Commons Attribution 4.0 International]: https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2/blob/main/data/tv2r/tv2r.md#license-information
+[gutenberg license]: https://www.gutenberg.org/policy/license.html
+[dannet license]: https://cst.ku.dk/projekter/dannet/license.txt
+[Other (Danish Law)]: https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2/blob/main/data/retsinformationdk/retsinformationdk.md#license-information
+
 
 
 ## Additional Information
@@ -356,27 +282,3 @@ We welcome contributions to the dataset such as new sources, better data filtering
 ### Citation Information
 
 This version expand upon existing dataset sources such as the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite the source of the dataset when using these datasets.
-
-### Disclaimer
-We do not own any of the text from which the data has been extracted.
-We only offer files that we believe we are free to redistribute. If any doubt occurs about the legality of any of our file downloads we will take them off right away after [contacting us](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/new).
-
-### Notice and take down policy
-Notice: Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
-
-- Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
-- Clearly identify the copyrighted work claimed to be infringed.
-- Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
-
-You can contact us through [this channel](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/new).
-
-Take down: We will comply to legitimate requests by removing the affected sources from the next release of the corpus.
-
----
-
-<h3 style="display: flex; align-items: center;">
-  <a href="https://www.foundationmodels.dk">
-    <img src="./docs/icon.png" width="30" style="margin-right: 10px;" />
-  </a>
-  A&nbsp;<a href=https://www.foundationmodels.dk>Danish Foundation Models</a>&nbsp;dataset
-</h3>
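The README diff above documents a fixed set of top-level fields per record. A quick way to sanity-check a record against that schema can be sketched as follows; `has_documented_fields` is a hypothetical helper for illustration, not part of the repository, and the field names come from the README's "Data Fields" section:

```python
# Field names taken from the README's documented schema.
REQUIRED_FIELDS = {"text", "source", "id", "added", "created", "license", "domain", "metadata"}

def has_documented_fields(entry: dict) -> bool:
    """True if a record carries every top-level field the README documents."""
    return REQUIRED_FIELDS.issubset(entry)

# The sample entry shown in the README diff.
sample = {
    "text": "SAMLEDE VÆRKER\n\nJEPPE AAKJÆR GYLDENDALSKE BOGHANDEL...",
    "source": "adl",
    "id": "adl_aakjaer06val",
    "added": "2020-09-14",
    "created": "1700-01-01, 2022-01-01",
    "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
    "domain": "Wiki & Books",
    "metadata": {"source-pretty": "Archive for Danish Literature"},
}
print(has_documented_fields(sample))  # True
```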
data/adl/adl.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: Archive for Danish Literature
 language:
 - da
 license: cc0-1.0
-license_name: CC-0
+license_name: Creative Commons Zero v1.0 Universal
 size_categories:
 - 1-10k
 task_categories:
@@ -11,89 +11,47 @@ task_categories:
 - fill-mask
 task_ids:
 - language-modeling
-source_datasets:
-- danish-foundation-models/danish-gigaword
 ---
-
-# Dataset Card for Archive for Danish Literature
-
+# Dataset Card for Archive for Danish Literature
 ## Dataset Description
-
-<!-- START-SHORT DESCRIPTION -->
-Danish literature from 1700-2023 from the Archive for Danish Literature (ADL).
-<!-- END-SHORT DESCRIPTION -->
-
-See also dataset [entry](https://sprogteknologi.dk/dataset/public-adl-text-sources) on sprogteknologi.dk and their API [here](https://rawgit.com/Det-Kongelige-Bibliotek/access-digital-objects/master/form-demos/adl-form.html).
-
-<!-- START-DESC-STATS -->
-- **Language**: dan, dansk, Danish
-- **Number of samples**: 498
-- **Number of tokens (Llama 3)**: 58.49M
-- **Average document length (characters)**: 324932.24
-<!-- END-DESC-STATS -->
-
-
-
-## Dataset Structure
+- **Number of records:** 498
+- **Languages:** Danish
+## Dataset Sturcture
 An example from the dataset looks as follows.
-
-
-<!-- START-SAMPLE -->
-```py
+```yaml
 {
-    "text": "SAMLEDE VÆRKER\n\nJEPPE AAKJÆR GYLDENDALSKE BOGHANDEL - NORDISK FORLAG KJØBENHAVN OG\nKRISTIANIA 1919 0[...]",
-    "source": "adl",
-    "id": "adl_aakjaer06val",
-    "added": "2020-09-14",
-    "created": "1700-01-01, 2022-01-01",
-    "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
-    "domain": "Wiki & Books",
-    "metadata": {
-        "source-pretty": "Archive for Danish Literature"
-    }
+    'text': 'SAMLEDE VÆRKER
+
+JEPPE AAKJÆR GYLDENDALSKE BOGHANDE',
+    'source': 'adl',
+    'id': 'adl_aakjaer06val',
+    'added': '2020-09-14',
+    'created': '1700-01-01, 2022-01-01',
+    'metadata': {
+        'domain': 'Wiki & Books',
+        'license': 'Creative Commons Legal Code
+
+CC0 1.0 Universal',
+        'source-pretty': ' Archive for Danish Literature'
+    }
 }
 ```
 
-### Data Fields
-
-An entry in the dataset consists of the following fields:
+## Data Fields
 
-- `text`(`str`): The content of the document.
-- `source` (`str`): The source of the document (see [Source Data](#source-data)).
-- `id` (`str`): An unique identifier for each document.
-- `added` (`str`): An date for when the document was added to this collection.
-- `created` (`str`): An date range for when the document was originally created.
-- `license` (`str`): The license of the document. The licenses vary according to the source.
-- `domain` (`str`): The domain of the source
-- `metadata/source-pretty` (`str`): The long form version of the short-form source name
-- `metadata/*`: Potentially additional metadata
-<!-- END-SAMPLE -->
+- **id**: source-specific identifier.
+- **text**: textual content of the document.
+- **source**: source of the data.
+- **added**: timestamp ai2 acquired this data.
+- **created**": timestamp when original document was created (best-guess if not available)
+- **metadata**: source-specific metadata.
 
-### Dataset Statistics
-
-<!-- START-DATASET PLOTS -->
-<img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
-<img>
-<!-- END-DATASET PLOTS -->
-
-
-## Additional Information
-
-
-### Citation Information
-
-This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
-
-> Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
-
-```bash
-@inproceedings{dagw,
-    title = {{The Danish Gigaword Corpus}},
-    author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
-    year = 2021,
-    booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
-    publisher = {NEALT}
-}
-```
+## License Information
+<details>
+<summary>Creative Commons Zero v1.0 Universal</summary>
+<p>
+Creative Commons Legal Code
+
+CC0 1.0 Universal
+</p>
+</details>
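The `created` field shown in the sample entries above holds a comma-separated date range (e.g. `'1700-01-01, 2022-01-01'`) rather than a single date. Splitting it into proper dates can be sketched as follows; `parse_created` is a hypothetical helper for illustration, not part of the repository:

```python
from datetime import date

def parse_created(created: str) -> tuple[date, date]:
    """Split a 'YYYY-MM-DD, YYYY-MM-DD' range into (start, end) dates."""
    start, end = (date.fromisoformat(part.strip()) for part in created.split(","))
    return start, end

start, end = parse_created("1700-01-01, 2022-01-01")
print(start.year, end.year)  # 1700 2022
```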
data/adl/descriptive_stats.json DELETED
@@ -1 +0,0 @@
-{"number_of_samples": 498, "average_document_length": 324932.2429718876, "number_of_tokens": 58493311, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
data/adl/images/dist_document_length.png DELETED
Git LFS Details
  • SHA256: 297677d067d7831f90c4d539c1d160af2087a25119691bbfda61e95de62ca5f5
  • Pointer size: 131 Bytes
  • Size of remote file: 539 kB
data/botxt/botxt.md CHANGED
@@ -1,9 +1,9 @@
1
  ---
2
- pretty_name: Bornholmsk
3
  language:
4
  - da
5
  license: cc0-1.0
6
- license_name: CC-0
7
  size_categories:
8
  - 1-10k
9
  task_categories:
@@ -11,88 +11,47 @@ task_categories:
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
14
- source_datasets:
15
- - danish-foundation-models/danish-gigaword
16
  ---
17
-
18
- # Dataset Card for Bornholmsk
19
-
20
  ## Dataset Description
21
-
22
- <!-- START-SHORT DESCRIPTION -->
23
- The Bornholmsk Ordbog Dictionary Project
24
- <!-- END-SHORT DESCRIPTION -->
25
-
26
- Fictional texts of various kinds written in Bornholmsk, the dialect spoken on the Danish island of Bornholm (The language code for Bornholmsk under IETF BCP-47 is da-bornholm), have been digitized (OCR’ed and proofread) by volunteers working within the recently resumed Bornholmsk Ordbog dictionary project (Kjeldsen, 2019). Most of the material included is written by Otto J. Lund in the period 1930-48 (novels, short stories, and poems). The Bornholmsk subcorpus, which in its present state amounts to circa 400 K words, also includes folk stories published by J. P. Kuhre in 1938, and by K. M. Kofoed in 1935, fictional letters by various authors published in the 1930s, as well as poems by Alfred Jensen published in 1948 and various other texts from the same period. The non-standardized orthography varies considerably from source to source. The Bornholmsk part of the Danish Gigaword is a significantly extended dataset, well beyond that studied in earlier NLP work on the dialect [(Derczynski and Kjeldsen, 2019)](https://aclanthology.org/W19-6138/).
27
-
28
-
29
- <!-- START-DESC-STATS -->
30
- - **Language**: dan, dansk, Danish
31
- - **Number of samples**: 106
32
- - **Number of tokens (Llama 3)**: 847.97K
33
- - **Average document length (characters)**: 18972.42
34
- <!-- END-DESC-STATS -->
35
-
36
-
37
-
38
- ## Dataset Structure
39
  An example from the dataset looks as follows.
40
-
41
-
42
- <!-- START-SAMPLE -->
43
- ```py
44
  {
45
- "text": "Ræua-Lârs\n\nRæua-Lârs å hans Konna, Stina, bode uda i Torpabakkana. Hanj hed nok æjla Lârs\nNielsen, m[...]",
46
- "source": "botxt",
47
- "id": "botxt_0000040",
48
- "added": "2024-05-16",
49
- "created": "2000-01-01, 2022-01-01",
50
- "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
51
- "domain": "Other",
52
- "metadata": {
53
- "source-pretty": "Bornholmsk (Danish dialect)"
54
- }
 
 
 
 
55
  }
56
  ```
57
 
58
- ### Data Fields
59
 
60
- An entry in the dataset consists of the following fields:
 
 
 
 
 
61
 
62
- - `text`(`str`): The content of the document.
63
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
64
- - `id` (`str`): An unique identifier for each document.
65
- - `added` (`str`): An date for when the document was added to this collection.
66
- - `created` (`str`): An date range for when the document was originally created.
67
- - `license` (`str`): The license of the document. The licenses vary according to the source.
68
- - `domain` (`str`): The domain of the source
69
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
70
- - `metadata/*`: Potentially additional metadata
71
- <!-- END-SAMPLE -->
72
 
73
- ### Dataset Statistics
74
-
75
- <!-- START-DATASET PLOTS -->
76
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
77
- <img>
78
- <!-- END-DATASET PLOTS -->
79
-
80
-
81
- ## Additional Information
82
-
83
-
84
- ### Citation Information
85
-
86
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
87
-
88
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
89
-
90
- ```bash
91
- @inproceedings{dagw,
92
- title = {{The Danish Gigaword Corpus}},
93
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
94
- year = 2021,
95
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
96
- publisher = {NEALT}
97
- }
98
- ```
 
  ---
+ pretty_name: Bornholmsk (Danish dialect)
  language:
  - da
  license: cc0-1.0
+ license_name: Creative Commons Zero v1.0 Universal
  size_categories:
  - 1-10k
  task_categories:
  - fill-mask
  task_ids:
  - language-modeling
  ---
+ # Dataset Card for Bornholmsk (Danish dialect)

  ## Dataset Description
+ - **Number of records:** 106
+ - **Languages:** Danish

+ ## Dataset Structure
  An example from the dataset looks as follows.
+ ```yaml
  {
+ 'text': 'Ræua-Lârs
+
+ Ræua-Lârs å hans Konna, Stina, bode uda',
+ 'source': 'botxt',
+ 'id': 'botxt_0000040',
+ 'added': '2024-05-16',
+ 'created': '2000-01-01, 2022-01-01',
+ 'metadata': {
+ 'domain': 'Other',
+ 'license': 'Creative Commons Legal Code
+
+ CC0 1.0 Universal',
+ 'source-pretty': 'Bornholmsk (Danish dialect)'
+ }
  }
  ```

+ ## Data Fields

+ - **id**: source-specific identifier.
+ - **text**: textual content of the document.
+ - **source**: source of the data.
+ - **added**: timestamp when AI2 acquired this data.
+ - **created**: timestamp when the original document was created (best guess if not available).
+ - **metadata**: source-specific metadata.
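The `created` field above packs a start and end date into one comma-separated string. A minimal sketch of splitting it into proper `date` objects (plain Python; the helper name is ours, not part of the dataset tooling):

```python
from datetime import date

def parse_created(created: str) -> tuple[date, date]:
    """Split a 'created' range like '2000-01-01, 2022-01-01' into (start, end)."""
    start_s, end_s = (part.strip() for part in created.split(","))
    return date.fromisoformat(start_s), date.fromisoformat(end_s)

# Value taken from the sample record above:
start, end = parse_created("2000-01-01, 2022-01-01")
print(start.year, end.year)  # 2000 2022
```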

+ ## License Information
+ <details>
+ <summary>Creative Commons Zero v1.0 Universal</summary>
+ <p>
+ Creative Commons Legal Code
+
+ CC0 1.0 Universal
+ </p>
+ </details>
data/botxt/descriptive_stats.json DELETED
@@ -1 +0,0 @@
- {"number_of_samples": 106, "average_document_length": 18972.415094339623, "number_of_tokens": 847973, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
 
data/botxt/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: e98f2f59f8cbe8be5691f1d7c073b2c13361d331546f9451d24b27fcde649f6c
  • Pointer size: 131 Bytes
  • Size of remote file: 541 kB
data/dannet/dannet.md CHANGED
@@ -1,8 +1,8 @@
  ---
- pretty_name: DanNet
  language:
  - da
- license: other
  license_name: DanNet 1.0 License
  size_categories:
  - 10k-100k
@@ -11,68 +11,74 @@ task_categories:
  - fill-mask
  task_ids:
  - language-modeling
- source_datasets:
- - danish-foundation-models/danish-gigaword
  ---
-
- # Dataset Card for DanNet
-
- <!-- START-SHORT DESCRIPTION -->
- [DanNet](https://cst.ku.dk/projekter/dannet) is a Danish WordNet.
- <!-- END-SHORT DESCRIPTION -->
-
-
- A WordNet is a lexico-semantic network which show the meaning and the relation between words through named connections. It can be considered a machine-readable dictionary.
-
-
  ## Dataset Description
-
- <!-- START-DESC-STATS -->
- - **Language**: dan, dansk, Danish
- - **Number of samples**: 49.04K
- - **Number of tokens (Llama 3)**: 1.52M
- - **Average document length (characters)**: 90.80
- <!-- END-DESC-STATS -->
-
- ## Dataset Structure
- An example from the dataset looks as follows.
-
- <!-- START-SAMPLE -->
- ```py
- {
- "text": "Når fodboldholdet fra 1. division i Ikast spiller hjemmekampe, lyder råbet ud over Ikast Stadion: We[...]",
- "source": "dannet",
- "id": "dannet_46506",
- "added": "2020-09-24",
- "created": "2000-01-01, 2022-01-01",
- "license": "Commercial Use of DanNet\n\nDanNet may be used in commercial applications in accordance with the follo[...]",
- "domain": "dannet",
- "metadata": {
- "source-pretty": "DanNet (Danish WordNet)"
- }
  }
  ```

- ### Data Fields
-
- An entry in the dataset consists of the following fields:
-
- - `text`(`str`): The content of the document.
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
- - `id` (`str`): An unique identifier for each document.
- - `added` (`str`): An date for when the document was added to this collection.
- - `created` (`str`): An date range for when the document was originally created.
- - `license` (`str`): The license of the document. The licenses vary according to the source.
- - `domain` (`str`): The domain of the source
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
- - `metadata/*`: Potentially additional metadata
- <!-- END-SAMPLE -->
-
  ## License Information
  <details>
@@ -119,31 +125,3 @@ LICENSEE agrees to preserve same.
  DanNet 2.1 Copyright 2009-12 by University of Copenhagen and Society for Danish
  </p>
  </details>
-
-
- ### Dataset Statistics
-
- <!-- START-DATASET PLOTS -->
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
- <img>
- <!-- END-DATASET PLOTS -->
-
-
- ## Additional Information
-
-
- ### Citation Information
-
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
-
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
-
- ```bash
- @inproceedings{dagw,
- title = {{The Danish Gigaword Corpus}},
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
- year = 2021,
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
- publisher = {NEALT}
- }
- ```
 
  ---
+ pretty_name: DanNet (Danish WordNet)
  language:
  - da
+ license: DanNet 1.0 License
  license_name: DanNet 1.0 License
  size_categories:
  - 10k-100k
  - fill-mask
  task_ids:
  - language-modeling
  ---
+ # Dataset Card for DanNet (Danish WordNet)

  ## Dataset Description
+ - **Number of records:** 49040
+ - **Languages:** Danish

+ ## Dataset Structure
+ An example from the dataset looks as follows.
+ ```yaml
+ {
+ 'text': 'Når fodboldholdet fra 1. division i Ikast spiller ',
+ 'source': 'dannet',
+ 'id': 'dannet_46506',
+ 'added': '2020-09-24',
+ 'created': '2000-01-01, 2022-01-01',
+ 'metadata': {
+ 'domain': 'dannet',
+ 'license': 'Commercial Use of DanNet
+
+ DanNet may be used in commercial applications in accordance with the following
+ license agreement. An attorney representing the commercial interest should
+ review this DanNet license with respect to the intended use.
+
+ DanNet 1.0 License
+
+ DanNet Release 2.1
+
+ This software and database is being provided to you, the LICENSEE, by University
+ of Copenhagen and Society for Danish Language and Literature under the following
+ license. By obtaining, using and/or copying this software and database, you
+ agree that you have read, understood, and will comply with these terms and
+ conditions.
+
+ Permission to use, copy, modify and distribute this software and database and
+ its documentation for any purpose and without fee or royalty is hereby granted,
+ provided that you agree to comply with the following copyright notice and
+ statements, including the disclaimer, and that the same appear on ALL copies of
+ the software, database and documentation, including modifications that you make
+ for internal use or for distribution.
+
+ THIS SOFTWARE AND DATABASE IS PROVIDED "AS IS" AND UNIVERSITY OF COPENHAGEN and
+ SOCIETY FOR DANISH LANGUAGE AND LITERATURE MAKE NO REPRESENTATIONS OR
+ WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION,
+ UNIVERSITY OF COPENHAGEN AND SOCIETY FOR DANISH LANGUAGE AND LITERATURE MAKE NO
+ REPRESENTATIONS OR WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR
+ PURPOSE OR THAT THE USE OF THE LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL
+ NOT INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS.
+
+ The names of University of Copenhagen and Society for Danish Language and
+ Literature may not be used in advertising or publicity pertaining to
+ distribution of the software and/or database. Title to copyright in this
+ software, database and any associated documentation shall at all times remain
+ with University of Copenhagen and Society for Danish Language and Literature and
+ LICENSEE agrees to preserve same.
+
+ DanNet 2.1 Copyright 2009-12 by University of Copenhagen and Society for Danish',
+ 'source-pretty': 'DanNet (Danish WordNet)'
+ }
  }
  ```

+ ## Data Fields

+ - **id**: source-specific identifier.
+ - **text**: textual content of the document.
+ - **source**: source of the data.
+ - **added**: timestamp when AI2 acquired this data.
+ - **created**: timestamp when the original document was created (best guess if not available).
+ - **metadata**: source-specific metadata.
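All cards in this collection share the flat schema listed above, so records from several sources can be pooled and filtered by `source`. A small illustration with hypothetical in-memory records (the real data lives in the per-source parquet files):

```python
# Hypothetical records following the documented schema; only the fields
# needed for the illustration are filled in.
records = [
    {"id": "dannet_46506", "source": "dannet", "text": "Når fodboldholdet ..."},
    {"id": "botxt_0000040", "source": "botxt", "text": "Ræua-Lârs ..."},
]

# Keep only DanNet records.
dannet_only = [r for r in records if r["source"] == "dannet"]
print([r["id"] for r in dannet_only])  # ['dannet_46506']
```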

  ## License Information
  <details>
  DanNet 2.1 Copyright 2009-12 by University of Copenhagen and Society for Danish
  </p>
  </details>
data/dannet/descriptive_stats.json DELETED
@@ -1 +0,0 @@
- {"number_of_samples": 49040, "average_document_length": 90.80340538336053, "number_of_tokens": 1523416, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
 
data/dannet/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 91dca32a1fd83b3699bb8ebae083dc697a0dac4b703ada720381448216ea0117
  • Pointer size: 131 Bytes
  • Size of remote file: 538 kB
data/depbank/depbank.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: Danish Dependency Treebank
  language:
  - da
  license: cc-by-sa-4.0
- license_name: CC-BY-SA 4.0
  size_categories:
  - 1-10k
  task_categories:
@@ -11,93 +11,41 @@ task_categories:
  - fill-mask
  task_ids:
  - language-modeling
- source_datasets:
- - danish-foundation-models/danish-gigaword
  ---
-
  # Dataset Card for Danish Dependency Treebank
-
- <!-- START-SHORT DESCRIPTION -->
- The Danish subsection of the [Universal Dependencies Treebank](https://github.com/UniversalDependencies/UD_Danish-DDT).
- <!-- END-SHORT DESCRIPTION -->
-
-
- The Danish UD treebank has been converted from the Danish Dependency Treebank (Buch-Kromman, 2003) into Universal Dependencies (UD). It consists of 5,512 sentences (100k words). The Danish source texts and the Danish part-of-speech tags were created by the PAROLE-DK project (Keson 1998) by the Danish Society for Language and Literature.
-
- While the dataset was initially intended as a rich annotation, this corpora only uses the raw text.
-
  ## Dataset Description
-
-
- <!-- START-DESC-STATS -->
- - **Language**: dan, dansk, Danish
- - **Number of samples**: 536
- - **Number of tokens (Llama 3)**: 185.45K
- - **Average document length (characters)**: 1018.90
- <!-- END-DESC-STATS -->
-
-
- ## Dataset Structure
  An example from the dataset looks as follows.
-
-
- <!-- START-SAMPLE -->
- ```py
  {
- "text": "\nH.L. Hansen var en usædvanmlig og frodig personlighed. Han skabte \nglæde og munterhed omkring sig o[...]",
- "source": "depbank",
- "id": "depbank_0375",
- "added": "2024-05-16",
- "created": "2000-01-01, 2022-01-01",
- "license": "Attribution-ShareAlike 4.0 International",
- "domain": "Other",
- "metadata": {
- "source-pretty": "Danish Dependency Treebank"
- }
  }
  ```

- ### Data Fields
-
- An entry in the dataset consists of the following fields:
-
- - `text`(`str`): The content of the document.
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
- - `id` (`str`): An unique identifier for each document.
- - `added` (`str`): An date for when the document was added to this collection.
- - `created` (`str`): An date range for when the document was originally created.
- - `license` (`str`): The license of the document. The licenses vary according to the source.
- - `domain` (`str`): The domain of the source
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
- - `metadata/*`: Potentially additional metadata
- <!-- END-SAMPLE -->
-
-
- ### Dataset Statistics
-
- <!-- START-DATASET PLOTS -->
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
- <img>
- <!-- END-DATASET PLOTS -->
-
-
- ## Additional Information
-
-
- ### Citation Information
-
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
-
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
-
- ```bash
- @inproceedings{dagw,
- title = {{The Danish Gigaword Corpus}},
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
- year = 2021,
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
- publisher = {NEALT}
- }
- ```
 
  language:
  - da
  license: cc-by-sa-4.0
+ license_name: Creative Commons Attribution Share Alike 4.0
  size_categories:
  - 1-10k
  task_categories:
  - fill-mask
  task_ids:
  - language-modeling
  ---
  # Dataset Card for Danish Dependency Treebank

  ## Dataset Description
+ - **Number of records:** 536
+ - **Languages:** Danish

+ ## Dataset Structure
  An example from the dataset looks as follows.
+ ```yaml
  {
+ 'text': 'H.L. Hansen var en usædvanmlig og frodig personlig',
+ 'source': 'depbank',
+ 'id': 'depbank_0375',
+ 'added': '2024-05-16',
+ 'created': '2000-01-01, 2022-01-01',
+ 'metadata': {
+ 'domain': 'Other',
+ 'license': 'Attribution-ShareAlike 4.0 International',
+ 'source-pretty': 'Danish Dependency Treebank'
+ }
  }
  ```

+ ## Data Fields

+ - **id**: source-specific identifier.
+ - **text**: textual content of the document.
+ - **source**: source of the data.
+ - **added**: timestamp when AI2 acquired this data.
+ - **created**: timestamp when the original document was created (best guess if not available).
+ - **metadata**: source-specific metadata.
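The per-card summary statistics (such as average document length in characters) can be recomputed from the `text` field alone. A sketch, with the helper name being ours rather than part of the release:

```python
def average_document_length(texts) -> float:
    """Mean document length in characters over an iterable of 'text' values."""
    texts = list(texts)
    return sum(len(t) for t in texts) / len(texts)

# Toy input for illustration; on the real data, pass the 'text' column.
print(average_document_length(["abcd", "ab"]))  # 3.0
```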
+
+ ## License Information
+ <details>
+ <summary>Creative Commons Attribution Share Alike 4.0</summary>
+ <p>
+ Attribution-ShareAlike 4.0 International
+ </p>
+ </details>
data/depbank/descriptive_stats.json DELETED
@@ -1 +0,0 @@
- {"number_of_samples": 536, "average_document_length": 1018.8992537313433, "number_of_tokens": 185454, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
 
data/depbank/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: b23e81411e3f3b86bbd3990cf2e59f4a08f7dae10b908cf3101487069c0296bc
  • Pointer size: 131 Bytes
  • Size of remote file: 547 kB
data/ep/descriptive_stats.json DELETED
@@ -1 +0,0 @@
- {"number_of_samples": 4213, "average_document_length": 74063.40469973891, "number_of_tokens": 100888932, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
 
data/ep/ep.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: European Parliament
  language:
  - da
  license: cc0-1.0
- license_name: CC-0
  size_categories:
  - 1-10k
  task_categories:
@@ -11,91 +11,45 @@ task_categories:
  - fill-mask
  task_ids:
  - language-modeling
- source_datasets:
- - danish-foundation-models/danish-gigaword
  ---
-
  # Dataset Card for European Parliament
-
- <!-- START-SHORT DESCRIPTION -->
- The Danish subsection of [Europarl](https://aclanthology.org/2005.mtsummit-papers.11/).
- <!-- END-SHORT DESCRIPTION -->
-
-
- The europarl is a corpus of parallel text in 11 languages from the proceedings of the European Parliament, which are published on the web. This corpus has found widespread use in the NLP community. It was initially intended as training data for statistical machine translation.
-
-
  ## Dataset Description
-
-
- <!-- START-DESC-STATS -->
- - **Language**: dan, dansk, Danish
- - **Number of samples**: 4.21K
- - **Number of tokens (Llama 3)**: 100.89M
- - **Average document length (characters)**: 74063.40
- <!-- END-DESC-STATS -->
-
-
- ## Dataset Structure
  An example from the dataset looks as follows.
-
-
- <!-- START-SAMPLE -->
- ```py
  {
- "text": "TALER 6703: Jeg har stemt for henstillingen om godkendelse af opdelingsanordninger til beskyttelse a[...]",
- "source": "ep",
- "id": "ep_07-02-01-008",
- "added": "2019-11-20",
- "created": "2004-01-01, 2009-01-01",
- "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
- "domain": "Conversation",
- "metadata": {
- "source-pretty": "European Parliament"
- }
  }
  ```

- ### Data Fields
-
- An entry in the dataset consists of the following fields:
-
- - `text`(`str`): The content of the document.
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
- - `id` (`str`): An unique identifier for each document.
- - `added` (`str`): An date for when the document was added to this collection.
- - `created` (`str`): An date range for when the document was originally created.
- - `license` (`str`): The license of the document. The licenses vary according to the source.
- - `domain` (`str`): The domain of the source
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
- - `metadata/*`: Potentially additional metadata
- <!-- END-SAMPLE -->
-
- ### Dataset Statistics
-
- <!-- START-DATASET PLOTS -->
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
- <img>
- <!-- END-DATASET PLOTS -->
-
- ## Additional Information
-
- ### Citation Information
-
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
-
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
-
- ```bash
- @inproceedings{dagw,
- title = {{The Danish Gigaword Corpus}},
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
- year = 2021,
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
- publisher = {NEALT}
- }
- ```
 
  language:
  - da
  license: cc0-1.0
+ license_name: Creative Commons Zero v1.0 Universal
  size_categories:
  - 1-10k
  task_categories:
  - fill-mask
  task_ids:
  - language-modeling
  ---
  # Dataset Card for European Parliament

  ## Dataset Description
+ - **Number of records:** 4213
+ - **Languages:** Danish

+ ## Dataset Structure
  An example from the dataset looks as follows.
+ ```yaml
  {
+ 'text': 'TALER 6703: Jeg har stemt for henstillingen om god',
+ 'source': 'ep',
+ 'id': 'ep_07-02-01-008',
+ 'added': '2019-11-20',
+ 'created': '2004-01-01, 2009-01-01',
+ 'metadata': {
+ 'domain': 'Conversation',
+ 'license': 'Creative Commons Legal Code
+
+ CC0 1.0 Universal',
+ 'source-pretty': 'European Parliament'
+ }
  }
  ```

+ ## Data Fields

+ - **id**: source-specific identifier.
+ - **text**: textual content of the document.
+ - **source**: source of the data.
+ - **added**: timestamp when AI2 acquired this data.
+ - **created**: timestamp when the original document was created (best guess if not available).
+ - **metadata**: source-specific metadata.
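Since every record is expected to carry the six top-level fields listed above, a quick schema check is easy to write. This validator is our own sketch, not part of the release:

```python
EXPECTED_FIELDS = {"text", "source", "id", "added", "created", "metadata"}

def missing_fields(record: dict) -> set:
    """Return which of the documented top-level fields a record lacks."""
    return EXPECTED_FIELDS - record.keys()

# Record shaped like the sample above.
record = {
    "text": "TALER 6703: Jeg har stemt for henstillingen om god",
    "source": "ep",
    "id": "ep_07-02-01-008",
    "added": "2019-11-20",
    "created": "2004-01-01, 2009-01-01",
    "metadata": {"domain": "Conversation"},
}
print(missing_fields(record))  # set()
```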

+ ## License Information
+ <details>
+ <summary>Creative Commons Zero v1.0 Universal</summary>
+ <p>
+ Creative Commons Legal Code
+
+ CC0 1.0 Universal
+ </p>
+ </details>
data/ep/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 8914d9fad81bcbc519c29b7c258a256d4eb7084ed8ff9c9100a93ad87fbb4171
  • Pointer size: 131 Bytes
  • Size of remote file: 545 kB
data/ft/descriptive_stats.json DELETED
@@ -1 +0,0 @@
- {"number_of_samples": 1315, "average_document_length": 266745.19163498096, "number_of_tokens": 114087231, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
 
data/ft/ft.md CHANGED
@@ -1,9 +1,9 @@
  ---
- pretty_name: Folketinget
  language:
  - da
  license: cc0-1.0
- license_name: CC-0
  size_categories:
  - 1-10k
  task_categories:
@@ -11,92 +11,45 @@ task_categories:
  - fill-mask
  task_ids:
  - language-modeling
- source_datasets:
- - danish-foundation-models/danish-gigaword
  ---
-
- # Dataset Card for Folketinget
-
  ## Dataset Description
-
- <!-- START-SHORT DESCRIPTION -->
- Records from all meetings of The Danish parliament (Folketinget) in the parliament hall.
- <!-- END-SHORT DESCRIPTION -->
-
-
- All records have a transcript produced by commercial Automatic Speech Recognition (ASR) followed by postediting by linguists employed by Folketinget for intelligibility, i.e., edit out dysfluencies, restarts, repairs, and mistakes. The transcript is, therefore, not a representation of spoken Danish but rather information content.
-
- In the parliament hall, one speaker at a time addresses members of the parliament. Monologues may include rebuttals or other comments to statements in previous monologues. While speakers can read aloud from a prepared statement or speak extemporaneously, we expect no difference to be apparent in the data because of the post-editing. The Folketinget section covers parliament hall sessions between 2009 and 2019. It contains discussions on a wide range of topics, issues, and named entities relevant to Danish society.
-
-
- <!-- START-DESC-STATS -->
- - **Language**: dan, dansk, Danish
- - **Number of samples**: 1.31K
- - **Number of tokens (Llama 3)**: 114.09M
- - **Average document length (characters)**: 266745.19
- <!-- END-DESC-STATS -->
-
-
- ## Dataset Structure
  An example from the dataset looks as follows.
-
-
- <!-- START-SAMPLE -->
- ```py
  {
- "text": "TALER 50: Mødet er åbnet. I dag er der følgende anmeldelser: Ministeren for by, bolig og landdistrik[...]",
- "source": "ft",
- "id": "ft_20121M100",
- "added": "2021-03-28",
- "created": "2009-01-01, 2019-01-01",
- "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
- "domain": "Conversation",
- "metadata": {
- "source-pretty": "Folketinget (Danish Parliament)"
- }
  }
  ```

- ### Data Fields
-
- An entry in the dataset consists of the following fields:
-
- - `text`(`str`): The content of the document.
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
- - `id` (`str`): An unique identifier for each document.
- - `added` (`str`): An date for when the document was added to this collection.
- - `created` (`str`): An date range for when the document was originally created.
- - `license` (`str`): The license of the document. The licenses vary according to the source.
- - `domain` (`str`): The domain of the source
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
- - `metadata/*`: Potentially additional metadata
- <!-- END-SAMPLE -->
-
-
- ### Dataset Statistics
-
- <!-- START-DATASET PLOTS -->
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
- <img>
- <!-- END-DATASET PLOTS -->
-
- ## Additional Information
-
- ### Citation Information
-
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
-
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
-
- ```bash
- @inproceedings{dagw,
- title = {{The Danish Gigaword Corpus}},
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
- year = 2021,
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
- publisher = {NEALT}
- }
- ```
 
  ---
+ pretty_name: Folketinget (Danish Parliament)
  language:
  - da
  license: cc0-1.0
+ license_name: Creative Commons Zero v1.0 Universal
  size_categories:
  - 1-10k
  task_categories:
  - fill-mask
  task_ids:
  - language-modeling
  ---
+ # Dataset Card for Folketinget (Danish Parliament)

  ## Dataset Description
+ - **Number of records:** 1315
+ - **Languages:** Danish

+ ## Dataset Structure
  An example from the dataset looks as follows.
+ ```yaml
  {
+ 'text': 'TALER 50: Mødet er åbnet. I dag er der følgende an',
+ 'source': 'ft',
+ 'id': 'ft_20121M100',
+ 'added': '2021-03-28',
+ 'created': '2009-01-01, 2019-01-01',
+ 'metadata': {
+ 'domain': 'Conversation',
+ 'license': 'Creative Commons Legal Code
+
+ CC0 1.0 Universal',
+ 'source-pretty': 'Folketinget (Danish Parliament)'
+ }
  }
  ```

+ ## Data Fields

+ - **id**: source-specific identifier.
+ - **text**: textual content of the document.
+ - **source**: source of the data.
+ - **added**: timestamp when AI2 acquired this data.
+ - **created**: timestamp when the original document was created (best guess if not available).
+ - **metadata**: source-specific metadata.
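The sample above opens with a speaker marker (`TALER 50:`). Assuming the `TALER <number>:` pattern holds throughout the transcripts (an assumption based only on the sample shown), the markers can be stripped for plain-text use:

```python
import re

# Matches speaker markers of the (assumed) form "TALER 50:".
SPEAKER_MARKER = re.compile(r"TALER \d+:\s*")

def strip_speakers(text: str) -> str:
    """Remove 'TALER <number>:' markers from a transcript."""
    return SPEAKER_MARKER.sub("", text)

print(strip_speakers("TALER 50: Mødet er åbnet."))  # Mødet er åbnet.
```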

+ ## License Information
+ <details>
+ <summary>Creative Commons Zero v1.0 Universal</summary>
+ <p>
+ Creative Commons Legal Code
+
+ CC0 1.0 Universal
+ </p>
+ </details>
data/ft/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: e16a1a9de4f1ef8fedd3e85035287a813d5980b25b40b09c54462671eaebcd81
  • Pointer size: 131 Bytes
  • Size of remote file: 550 kB
data/gutenberg/descriptive_stats.json DELETED
@@ -1 +0,0 @@
- {"number_of_samples": 66, "average_document_length": 290147.9393939394, "number_of_tokens": 6763317, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
 
data/gutenberg/gutenberg.md CHANGED
@@ -2,7 +2,7 @@
2
  pretty_name: Gutenberg
3
  language:
4
  - da
5
- license: other
6
  license_name: Gutenberg License
7
  size_categories:
8
  - 1-10k
@@ -11,75 +11,365 @@ task_categories:
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
14
- source_datasets:
15
- - danish-foundation-models/danish-gigaword
16
  ---
17
-
18
  # Dataset Card for Gutenberg
19
-
20
  ## Dataset Description
 
 
 
 
 
 
 
 
 
 
 
 
 
 
21
 
22
- <!-- START-SHORT DESCRIPTION -->
23
- The Danish subsection from Project [Gutenberg](https://www.gutenberg.org).
24
- <!-- END-SHORT DESCRIPTION -->
25
 
 
 
 
 
 
 
26
 
27
- Project Gutenberg is an online library of free eBooks. Project Gutenberg was the first provider of free electronic books, or eBooks.
28
 
 
 
29
 
30
- <!-- START-DESC-STATS -->
31
- - **Language**: dan, dansk, Danish
32
- - **Number of samples**: 66
33
- - **Number of tokens (Llama 3)**: 6.76M
34
- - **Average document length (characters)**: 290147.94
35
- <!-- END-DESC-STATS -->
 
 
 
 
36
 
 
 
 
 
 
 
 
 
 
37
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
38
 
39
- ## Dataset Structure
40
- An example from the dataset looks as follows.
 
 
 
 
 
 
 
41
 
 
42
 
43
- <!-- START-SAMPLE -->
44
- ```py
45
- {
46
- "text": "Afskriverens bemærkninger: Åbenlyse trykfejl er rettet\ni denne e-bog, men forfatterens stavning er f[...]",
47
- "source": "gutenberg",
48
- "id": "gutenberg_43899",
49
- "added": "2020-09-12",
50
- "created": "1700-01-01, 2022-01-01",
51
- "license": "*** START: FULL LICENSE ***\n\nTHE FULL PROJECT GUTENBERG LICENSE\nPLEASE READ THIS BEFORE YOU DISTRIBU[...]",
52
- "domain": "Wiki & Books",
53
- "metadata": {
54
- "source-pretty": "Gutenberg"
55
- }
56
- }
57
- ```
58
 
59
- ### Data Fields
 
 
 
60
 
61
- An entry in the dataset consists of the following fields:
 
 
 
 
 
 
 
 
 
62
 
63
- - `text`(`str`): The content of the document.
64
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
65
- - `id` (`str`): An unique identifier for each document.
66
- - `added` (`str`): An date for when the document was added to this collection.
67
- - `created` (`str`): An date range for when the document was originally created.
68
- - `license` (`str`): The license of the document. The licenses vary according to the source.
69
- - `domain` (`str`): The domain of the source
70
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
71
- - `metadata/*`: Potentially additional metadata
72
- <!-- END-SAMPLE -->
73
 
 
 
 
74
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
75
 
76
- ## License Information
77
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
78
  <details>
79
  <summary>Gutenberg License</summary>
80
  <p>
81
-
82
- ```
83
  *** START: FULL LICENSE ***
84
 
85
  THE FULL PROJECT GUTENBERG LICENSE
@@ -404,56 +694,6 @@ This Web site includes information about Project Gutenberg-tm,
404
  including how to make donations to the Project Gutenberg Literary
405
  Archive Foundation, how to help produce our new eBooks, and how to
406
  subscribe to our email newsletter to hear about new eBooks.
407
- ```
408
 
409
  </p>
410
  </details>
411
-
412
-
413
- ### Dataset Statistics
414
-
415
- <!-- START-DATASET PLOTS -->
416
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
417
- <img>
418
- <!-- END-DATASET PLOTS -->
419
-
420
-
421
-
422
- ## Additional Information
423
-
424
-
425
- ### Citation Information
426
-
427
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
428
-
429
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
430
-
431
- ```bash
432
- @inproceedings{dagw,
433
- title = {{The Danish Gigaword Corpus}},
434
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
435
- year = 2021,
436
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
437
- publisher = {NEALT}
438
- }
439
- ```
440
 
2
  pretty_name: Gutenberg
3
  language:
4
  - da
5
+ license: other
6
  license_name: Gutenberg License
7
  size_categories:
8
  - 1-10k
 
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
 
 
14
  ---
 
15
  # Dataset Card for Gutenberg
 
16
  ## Dataset Description
17
+ - **Number of records:** 66
18
+ - **Languages:** Danish
19
+ ## Dataset Structure
20
+ An example from the dataset looks as follows.
21
+ ```yaml
22
+ {
23
+ 'text': 'Afskriverens bemærkninger: Åbenlyse trykfejl er re',
24
+ 'source': 'gutenberg',
25
+ 'id': 'gutenberg_43899',
26
+ 'added': '2020-09-12',
27
+ 'created': '1700-01-01, 2022-01-01',
28
+ 'metadata': {
29
+ 'domain': 'Wiki & Books',
30
+ 'license': '*** START: FULL LICENSE ***
31
 
32
+ THE FULL PROJECT GUTENBERG LICENSE
33
+ PLEASE READ THIS BEFORE YOU DISTRIBUTE OR USE THIS WORK
 
34
 
35
+ To protect the Project Gutenberg-tm mission of promoting the free
36
+ distribution of electronic works, by using or distributing this work
37
+ (or any other work associated in any way with the phrase "Project
38
+ Gutenberg"), you agree to comply with all the terms of the Full Project
39
+ Gutenberg-tm License available with this file or online at
40
+ www.gutenberg.org/license.
41
 
 
42
 
43
+ Section 1. General Terms of Use and Redistributing Project Gutenberg-tm
44
+ electronic works
45
 
46
+ 1.A. By reading or using any part of this Project Gutenberg-tm
47
+ electronic work, you indicate that you have read, understand, agree to
48
+ and accept all the terms of this license and intellectual property
49
+ (trademark/copyright) agreement. If you do not agree to abide by all
50
+ the terms of this agreement, you must cease using and return or destroy
51
+ all copies of Project Gutenberg-tm electronic works in your possession.
52
+ If you paid a fee for obtaining a copy of or access to a Project
53
+ Gutenberg-tm electronic work and you do not agree to be bound by the
54
+ terms of this agreement, you may obtain a refund from the person or
55
+ entity to whom you paid the fee as set forth in paragraph 1.E.8.
56
 
57
+ 1.B. "Project Gutenberg" is a registered trademark. It may only be
58
+ used on or associated in any way with an electronic work by people who
59
+ agree to be bound by the terms of this agreement. There are a few
60
+ things that you can do with most Project Gutenberg-tm electronic works
61
+ even without complying with the full terms of this agreement. See
62
+ paragraph 1.C below. There are a lot of things you can do with Project
63
+ Gutenberg-tm electronic works if you follow the terms of this agreement
64
+ and help preserve free future access to Project Gutenberg-tm electronic
65
+ works. See paragraph 1.E below.
66
 
67
+ 1.C. The Project Gutenberg Literary Archive Foundation ("the Foundation"
68
+ or PGLAF), owns a compilation copyright in the collection of Project
69
+ Gutenberg-tm electronic works. Nearly all the individual works in the
70
+ collection are in the public domain in the United States. If an
71
+ individual work is in the public domain in the United States and you are
72
+ located in the United States, we do not claim a right to prevent you from
73
+ copying, distributing, performing, displaying or creating derivative
74
+ works based on the work as long as all references to Project Gutenberg
75
+ are removed. Of course, we hope that you will support the Project
76
+ Gutenberg-tm mission of promoting free access to electronic works by
77
+ freely sharing Project Gutenberg-tm works in compliance with the terms of
78
+ this agreement for keeping the Project Gutenberg-tm name associated with
79
+ the work. You can easily comply with the terms of this agreement by
80
+ keeping this work in the same format with its attached full Project
81
+ Gutenberg-tm License when you share it without charge with others.
82
 
83
+ 1.D. The copyright laws of the place where you are located also govern
84
+ what you can do with this work. Copyright laws in most countries are in
85
+ a constant state of change. If you are outside the United States, check
86
+ the laws of your country in addition to the terms of this agreement
87
+ before downloading, copying, displaying, performing, distributing or
88
+ creating derivative works based on this work or any other Project
89
+ Gutenberg-tm work. The Foundation makes no representations concerning
90
+ the copyright status of any work in any country outside the United
91
+ States.
92
 
93
+ 1.E. Unless you have removed all references to Project Gutenberg:
94
 
95
+ 1.E.1. The following sentence, with active links to, or other immediate
96
+ access to, the full Project Gutenberg-tm License must appear prominently
97
+ whenever any copy of a Project Gutenberg-tm work (any work on which the
98
+ phrase "Project Gutenberg" appears, or with which the phrase "Project
99
+ Gutenberg" is associated) is accessed, displayed, performed, viewed,
100
+ copied or distributed:
101
 
102
+ This eBook is for the use of anyone anywhere at no cost and with
103
+ almost no restrictions whatsoever. You may copy it, give it away or
104
+ re-use it under the terms of the Project Gutenberg License included
105
+ with this eBook or online at www.gutenberg.org
106
 
107
+ 1.E.2. If an individual Project Gutenberg-tm electronic work is derived
108
+ from the public domain (does not contain a notice indicating that it is
109
+ posted with permission of the copyright holder), the work can be copied
110
+ and distributed to anyone in the United States without paying any fees
111
+ or charges. If you are redistributing or providing access to a work
112
+ with the phrase "Project Gutenberg" associated with or appearing on the
113
+ work, you must comply either with the requirements of paragraphs 1.E.1
114
+ through 1.E.7 or obtain permission for the use of the work and the
115
+ Project Gutenberg-tm trademark as set forth in paragraphs 1.E.8 or
116
+ 1.E.9.
117
 
118
+ 1.E.3. If an individual Project Gutenberg-tm electronic work is posted
119
+ with the permission of the copyright holder, your use and distribution
120
+ must comply with both paragraphs 1.E.1 through 1.E.7 and any additional
121
+ terms imposed by the copyright holder. Additional terms will be linked
122
+ to the Project Gutenberg-tm License for all works posted with the
123
+ permission of the copyright holder found at the beginning of this work.
 
 
 
 
124
 
125
+ 1.E.4. Do not unlink or detach or remove the full Project Gutenberg-tm
126
+ License terms from this work, or any files containing a part of this
127
+ work or any other work associated with Project Gutenberg-tm.
128
 
129
+ 1.E.5. Do not copy, display, perform, distribute or redistribute this
130
+ electronic work, or any part of this electronic work, without
131
+ prominently displaying the sentence set forth in paragraph 1.E.1 with
132
+ active links or immediate access to the full terms of the Project
133
+ Gutenberg-tm License.
134
+
135
+ 1.E.6. You may convert to and distribute this work in any binary,
136
+ compressed, marked up, nonproprietary or proprietary form, including any
137
+ word processing or hypertext form. However, if you provide access to or
138
+ distribute copies of a Project Gutenberg-tm work in a format other than
139
+ "Plain Vanilla ASCII" or other format used in the official version
140
+ posted on the official Project Gutenberg-tm web site (www.gutenberg.org),
141
+ you must, at no additional cost, fee or expense to the user, provide a
142
+ copy, a means of exporting a copy, or a means of obtaining a copy upon
143
+ request, of the work in its original "Plain Vanilla ASCII" or other
144
+ form. Any alternate format must include the full Project Gutenberg-tm
145
+ License as specified in paragraph 1.E.1.
146
+
147
+ 1.E.7. Do not charge a fee for access to, viewing, displaying,
148
+ performing, copying or distributing any Project Gutenberg-tm works
149
+ unless you comply with paragraph 1.E.8 or 1.E.9.
150
+
151
+ 1.E.8. You may charge a reasonable fee for copies of or providing
152
+ access to or distributing Project Gutenberg-tm electronic works provided
153
+ that
154
+
155
+ - You pay a royalty fee of 20% of the gross profits you derive from
156
+ the use of Project Gutenberg-tm works calculated using the method
157
+ you already use to calculate your applicable taxes. The fee is
158
+ owed to the owner of the Project Gutenberg-tm trademark, but he
159
+ has agreed to donate royalties under this paragraph to the
160
+ Project Gutenberg Literary Archive Foundation. Royalty payments
161
+ must be paid within 60 days following each date on which you
162
+ prepare (or are legally required to prepare) your periodic tax
163
+ returns. Royalty payments should be clearly marked as such and
164
+ sent to the Project Gutenberg Literary Archive Foundation at the
165
+ address specified in Section 4, "Information about donations to
166
+ the Project Gutenberg Literary Archive Foundation."
167
+
168
+ - You provide a full refund of any money paid by a user who notifies
169
+ you in writing (or by e-mail) within 30 days of receipt that s/he
170
+ does not agree to the terms of the full Project Gutenberg-tm
171
+ License. You must require such a user to return or
172
+ destroy all copies of the works possessed in a physical medium
173
+ and discontinue all use of and all access to other copies of
174
+ Project Gutenberg-tm works.
175
+
176
+ - You provide, in accordance with paragraph 1.F.3, a full refund of any
177
+ money paid for a work or a replacement copy, if a defect in the
178
+ electronic work is discovered and reported to you within 90 days
179
+ of receipt of the work.
180
+
181
+ - You comply with all other terms of this agreement for free
182
+ distribution of Project Gutenberg-tm works.
183
+
184
+ 1.E.9. If you wish to charge a fee or distribute a Project Gutenberg-tm
185
+ electronic work or group of works on different terms than are set
186
+ forth in this agreement, you must obtain permission in writing from
187
+ both the Project Gutenberg Literary Archive Foundation and Michael
188
+ Hart, the owner of the Project Gutenberg-tm trademark. Contact the
189
+ Foundation as set forth in Section 3 below.
190
+
191
+ 1.F.
192
+
193
+ 1.F.1. Project Gutenberg volunteers and employees expend considerable
194
+ effort to identify, do copyright research on, transcribe and proofread
195
+ public domain works in creating the Project Gutenberg-tm
196
+ collection. Despite these efforts, Project Gutenberg-tm electronic
197
+ works, and the medium on which they may be stored, may contain
198
+ "Defects," such as, but not limited to, incomplete, inaccurate or
199
+ corrupt data, transcription errors, a copyright or other intellectual
200
+ property infringement, a defective or damaged disk or other medium, a
201
+ computer virus, or computer codes that damage or cannot be read by
202
+ your equipment.
203
+
204
+ 1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except for the "Right
205
+ of Replacement or Refund" described in paragraph 1.F.3, the Project
206
+ Gutenberg Literary Archive Foundation, the owner of the Project
207
+ Gutenberg-tm trademark, and any other party distributing a Project
208
+ Gutenberg-tm electronic work under this agreement, disclaim all
209
+ liability to you for damages, costs and expenses, including legal
210
+ fees. YOU AGREE THAT YOU HAVE NO REMEDIES FOR NEGLIGENCE, STRICT
211
+ LIABILITY, BREACH OF WARRANTY OR BREACH OF CONTRACT EXCEPT THOSE
212
+ PROVIDED IN PARAGRAPH 1.F.3. YOU AGREE THAT THE FOUNDATION, THE
213
+ TRADEMARK OWNER, AND ANY DISTRIBUTOR UNDER THIS AGREEMENT WILL NOT BE
214
+ LIABLE TO YOU FOR ACTUAL, DIRECT, INDIRECT, CONSEQUENTIAL, PUNITIVE OR
215
+ INCIDENTAL DAMAGES EVEN IF YOU GIVE NOTICE OF THE POSSIBILITY OF SUCH
216
+ DAMAGE.
217
+
218
+ 1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you discover a
219
+ defect in this electronic work within 90 days of receiving it, you can
220
+ receive a refund of the money (if any) you paid for it by sending a
221
+ written explanation to the person you received the work from. If you
222
+ received the work on a physical medium, you must return the medium with
223
+ your written explanation. The person or entity that provided you with
224
+ the defective work may elect to provide a replacement copy in lieu of a
225
+ refund. If you received the work electronically, the person or entity
226
+ providing it to you may choose to give you a second opportunity to
227
+ receive the work electronically in lieu of a refund. If the second copy
228
+ is also defective, you may demand a refund in writing without further
229
+ opportunities to fix the problem.
230
+
231
+ 1.F.4. Except for the limited right of replacement or refund set forth
232
+ in paragraph 1.F.3, this work is provided to you 'AS-IS', WITH NO OTHER
233
+ WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
234
+ WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PURPOSE.
235
+
236
+ 1.F.5. Some states do not allow disclaimers of certain implied
237
+ warranties or the exclusion or limitation of certain types of damages.
238
+ If any disclaimer or limitation set forth in this agreement violates the
239
+ law of the state applicable to this agreement, the agreement shall be
240
+ interpreted to make the maximum disclaimer or limitation permitted by
241
+ the applicable state law. The invalidity or unenforceability of any
242
+ provision of this agreement shall not void the remaining provisions.
243
+
244
+ 1.F.6. INDEMNITY - You agree to indemnify and hold the Foundation, the
245
+ trademark owner, any agent or employee of the Foundation, anyone
246
+ providing copies of Project Gutenberg-tm electronic works in accordance
247
+ with this agreement, and any volunteers associated with the production,
248
+ promotion and distribution of Project Gutenberg-tm electronic works,
249
+ harmless from all liability, costs and expenses, including legal fees,
250
+ that arise directly or indirectly from any of the following which you do
251
+ or cause to occur: (a) distribution of this or any Project Gutenberg-tm
252
+ work, (b) alteration, modification, or additions or deletions to any
253
+ Project Gutenberg-tm work, and (c) any Defect you cause.
254
 
 
255
 
256
+ Section 2. Information about the Mission of Project Gutenberg-tm
257
+
258
+ Project Gutenberg-tm is synonymous with the free distribution of
259
+ electronic works in formats readable by the widest variety of computers
260
+ including obsolete, old, middle-aged and new computers. It exists
261
+ because of the efforts of hundreds of volunteers and donations from
262
+ people in all walks of life.
263
+
264
+ Volunteers and financial support to provide volunteers with the
265
+ assistance they need are critical to reaching Project Gutenberg-tm's
266
+ goals and ensuring that the Project Gutenberg-tm collection will
267
+ remain freely available for generations to come. In 2001, the Project
268
+ Gutenberg Literary Archive Foundation was created to provide a secure
269
+ and permanent future for Project Gutenberg-tm and future generations.
270
+ To learn more about the Project Gutenberg Literary Archive Foundation
271
+ and how your efforts and donations can help, see Sections 3 and 4
272
+ and the Foundation information page at www.gutenberg.org
273
+
274
+
275
+ Section 3. Information about the Project Gutenberg Literary Archive
276
+ Foundation
277
+
278
+ The Project Gutenberg Literary Archive Foundation is a non profit
279
+ 501(c)(3) educational corporation organized under the laws of the
280
+ state of Mississippi and granted tax exempt status by the Internal
281
+ Revenue Service. The Foundation's EIN or federal tax identification
282
+ number is 64-6221541. Contributions to the Project Gutenberg
283
+ Literary Archive Foundation are tax deductible to the full extent
284
+ permitted by U.S. federal laws and your state's laws.
285
+
286
+ The Foundation's principal office is located at 4557 Melan Dr. S.
287
+ Fairbanks, AK, 99712., but its volunteers and employees are scattered
288
+ throughout numerous locations. Its business office is located at 809
289
+ North 1500 West, Salt Lake City, UT 84116, (801) 596-1887. Email
290
+ contact links and up to date contact information can be found at the
291
+ Foundation's web site and official page at www.gutenberg.org/contact
292
+
293
+ For additional contact information:
294
+ Dr. Gregory B. Newby
295
+ Chief Executive and Director
296
+ gbnewby@pglaf.org
297
+
298
+ Section 4. Information about Donations to the Project Gutenberg
299
+ Literary Archive Foundation
300
+
301
+ Project Gutenberg-tm depends upon and cannot survive without wide
302
+ spread public support and donations to carry out its mission of
303
+ increasing the number of public domain and licensed works that can be
304
+ freely distributed in machine readable form accessible by the widest
305
+ array of equipment including outdated equipment. Many small donations
306
+ ($1 to $5,000) are particularly important to maintaining tax exempt
307
+ status with the IRS.
308
+
309
+ The Foundation is committed to complying with the laws regulating
310
+ charities and charitable donations in all 50 states of the United
311
+ States. Compliance requirements are not uniform and it takes a
312
+ considerable effort, much paperwork and many fees to meet and keep up
313
+ with these requirements. We do not solicit donations in locations
314
+ where we have not received written confirmation of compliance. To
315
+ SEND DONATIONS or determine the status of compliance for any
316
+ particular state visit www.gutenberg.org/donate
317
+
318
+ While we cannot and do not solicit contributions from states where we
319
+ have not met the solicitation requirements, we know of no prohibition
320
+ against accepting unsolicited donations from donors in such states who
321
+ approach us with offers to donate.
322
+
323
+ International donations are gratefully accepted, but we cannot make
324
+ any statements concerning tax treatment of donations received from
325
+ outside the United States. U.S. laws alone swamp our small staff.
326
+
327
+ Please check the Project Gutenberg Web pages for current donation
328
+ methods and addresses. Donations are accepted in a number of other
329
+ ways including checks, online payments and credit card donations.
330
+ To donate, please visit: www.gutenberg.org/donate
331
+
332
+
333
+ Section 5. General Information About Project Gutenberg-tm electronic
334
+ works.
335
+
336
+ Professor Michael S. Hart was the originator of the Project Gutenberg-tm
337
+ concept of a library of electronic works that could be freely shared
338
+ with anyone. For forty years, he produced and distributed Project
339
+ Gutenberg-tm eBooks with only a loose network of volunteer support.
340
+
341
+ Project Gutenberg-tm eBooks are often created from several printed
342
+ editions, all of which are confirmed as Public Domain in the U.S.
343
+ unless a copyright notice is included. Thus, we do not necessarily
344
+ keep eBooks in compliance with any particular paper edition.
345
+
346
+ Most people start at our Web site which has the main PG search facility:
347
+
348
+ www.gutenberg.org
349
+
350
+ This Web site includes information about Project Gutenberg-tm,
351
+ including how to make donations to the Project Gutenberg Literary
352
+ Archive Foundation, how to help produce our new eBooks, and how to
353
+ subscribe to our email newsletter to hear about new eBooks.
354
+ ',
355
+ 'source-pretty': 'Gutenberg'
356
+ }
357
+ }
358
+ ```
359
+
360
+ ## Data Fields
361
+
362
+ - **id**: source-specific identifier.
363
+ - **text**: textual content of the document.
364
+ - **source**: source of the data.
365
+ - **added**: timestamp when ai2 acquired this data.
366
+ - **created**: timestamp when the original document was created (best guess if not available).
367
+ - **metadata**: source-specific metadata.
368
+
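The field list above can be sanity-checked with a short sketch. This is a hedged example: the record below is abridged from the sample shown earlier in this card, and the range parsing assumes `created` always holds two comma-separated ISO dates, which the samples here suggest but the card does not guarantee.

```python
from datetime import date

# Abridged record, mirroring the sample shown earlier in this card.
record = {
    "text": "Afskriverens bemærkninger: Åbenlyse trykfejl er re",
    "source": "gutenberg",
    "id": "gutenberg_43899",
    "added": "2020-09-12",
    "created": "1700-01-01, 2022-01-01",
    "metadata": {"domain": "Wiki & Books", "source-pretty": "Gutenberg"},
}

# All top-level fields listed above should be present.
expected = {"id", "text", "source", "added", "created", "metadata"}
assert expected <= record.keys()

# `created` is a comma-separated date range; parse both endpoints.
start, end = (date.fromisoformat(p.strip()) for p in record["created"].split(","))
print(start.year, end.year)  # 1700 2022
```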
369
+ ## License Information
370
  <details>
371
  <summary>Gutenberg License</summary>
372
  <p>
 
 
373
  *** START: FULL LICENSE ***
374
 
375
  THE FULL PROJECT GUTENBERG LICENSE
 
694
  including how to make donations to the Project Gutenberg Literary
695
  Archive Foundation, how to help produce our new eBooks, and how to
696
  subscribe to our email newsletter to hear about new eBooks.
 
697
 
698
  </p>
699
  </details>
data/gutenberg/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 7211ebb972796ee921e5c9d19cc8a266cc42ccab560d1701464ff2a865268116
  • Pointer size: 131 Bytes
  • Size of remote file: 539 kB
data/hest/descriptive_stats.json DELETED
@@ -1 +0,0 @@
1
- {"number_of_samples": 14391, "average_document_length": 82950.79104996179, "number_of_tokens": 389325153, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
 
data/hest/hest.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: Hestenettet (Danish debate forum)
3
  language:
4
  - da
5
  license: cc0-1.0
6
- license_name: CC-0
7
  size_categories:
8
  - 10k-100k
9
  task_categories:
@@ -11,92 +11,46 @@ task_categories:
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
14
- source_datasets:
15
- - danish-foundation-models/danish-gigaword
16
  ---
17
-
18
- # Dataset Card for Hestenettet
19
-
20
- <!-- START-SHORT DESCRIPTION -->
21
- Samples from the Danish debate forum www.heste-nettet.dk.
22
- <!-- END-SHORT DESCRIPTION -->
23
-
24
-
25
- The forum have been in use since 1997 and it is used as a debate forum covering a wide range of everyday topics.
26
-
27
- Its inclusion as training data for large language models have multiple times reached [national news](https://www.dr.dk/nyheder/viden/teknologi/heste-nettet-kan-blive-grundlag-kunstig-intelligens-paa-dansk).
28
-
29
  ## Dataset Description
30
-
31
-
32
- <!-- START-DESC-STATS -->
33
- - **Language**: dan, dansk, Danish
34
- - **Number of samples**: 14.39K
35
- - **Number of tokens (Llama 3)**: 389.33M
36
- - **Average document length (characters)**: 82950.79
37
- <!-- END-DESC-STATS -->
38
-
39
-
40
-
41
- ## Dataset Structure
42
  An example from the dataset looks as follows.
43
-
44
-
45
- <!-- START-SAMPLE -->
46
- ```py
47
  {
48
- "text": "Er den ikke kær? \nJeg kan ikke forstå at der altid er nogle der åbenbart ser alle indlæg her på HN ,[...]",
49
- "source": "hest",
50
- "id": "hest_forum112802271280227_0",
51
- "added": "2020-10-05",
52
- "created": "2000-01-01, 2022-01-01",
53
- "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
54
- "domain": "Social Media",
55
- "metadata": {
56
- "source-pretty": "Hestenettet (Danish debate forum)"
57
- }
 
 
 
58
  }
59
  ```
60
 
61
- ### Data Fields
62
-
63
- An entry in the dataset consists of the following fields:
64
-
65
- - `text`(`str`): The content of the document.
66
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
67
- - `id` (`str`): An unique identifier for each document.
68
- - `added` (`str`): An date for when the document was added to this collection.
69
- - `created` (`str`): An date range for when the document was originally created.
70
- - `license` (`str`): The license of the document. The licenses vary according to the source.
71
- - `domain` (`str`): The domain of the source
72
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
73
- - `metadata/*`: Potentially additional metadata
74
- <!-- END-SAMPLE -->
75
-
76
-
77
- ### Dataset Statistics
78
-
79
- <!-- START-DATASET PLOTS -->
80
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
81
- <img>
82
- <!-- END-DATASET PLOTS -->
83
 
 
 
 
 
 
 
84
 
85
- ## Additional Information
 
 
 
 
86
 
87
-
88
- ### Citation Information
89
-
90
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
91
-
92
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
93
-
94
- ```bash
95
- @inproceedings{dagw,
96
- title = {{The Danish Gigaword Corpus}},
97
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
98
- year = 2021,
99
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
100
- publisher = {NEALT}
101
- }
102
- ```
 
3
  language:
4
  - da
5
  license: cc0-1.0
6
+ license_name: Creative Commons Zero v1.0 Universal
7
  size_categories:
8
  - 10k-100k
9
  task_categories:
 
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
 
 
14
  ---
15
+ # Dataset Card for Hestenettet (Danish debate forum)
 
  ## Dataset Description
17
+ - **Number of records:** 14391
18
+ - **Languages:** Danish
19
+ ## Dataset Structure
  An example from the dataset looks as follows.
21
+ ```yaml
 
 
 
22
  {
23
+ 'text': 'Er den ikke kær?
24
+ Jeg kan ikke forstå at der altid',
25
+ 'source': 'hest',
26
+ 'id': 'hest_forum112802271280227_0',
27
+ 'added': '2020-10-05',
28
+ 'created': '2000-01-01, 2022-01-01',
29
+ 'metadata': {
30
+ 'domain': 'Social Media',
31
+ 'license': 'Creative Commons Legal Code
32
+
33
+ CC0 1.0 Universal',
34
+ 'source-pretty': 'Hestenettet (Danish debate forum)'
35
+ }
36
  }
37
  ```
38
 
39
+ ## Data Fields
40
 
41
+ - **id**: source-specific identifier.
42
+ - **text**: textual content of the document.
43
+ - **source**: source of the data.
44
+ - **added**: timestamp when ai2 acquired this data.
45
+ - **created**: timestamp when the original document was created (best guess if not available).
46
+ - **metadata**: source-specific metadata.
47
 
48
+ ## License Information
49
+ <details>
50
+ <summary>Creative Commons Zero v1.0 Universal</summary>
51
+ <p>
52
+ Creative Commons Legal Code
53
 
54
+ CC0 1.0 Universal
55
+ </p>
56
+ </details>
data/hest/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 721ef6123a43f89bca03351e7a6459d6e40906024bcd2bc9e0a1fa377c37d60b
  • Pointer size: 131 Bytes
  • Size of remote file: 545 kB
data/jvj/descriptive_stats.json DELETED
@@ -1 +0,0 @@
1
- {"number_of_samples": 42, "average_document_length": 254893.66666666666, "number_of_tokens": 3549181, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
 
data/jvj/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 842b2aff42b3efabe2ec7dd425a9b41f836ca21f1f6332561dcc90e6bb7db62e
  • Pointer size: 131 Bytes
  • Size of remote file: 534 kB
data/jvj/jvj.md CHANGED
@@ -1,9 +1,9 @@
1
  ---
2
- pretty_name: Johannes V. Jensen
3
  language:
4
  - da
5
  license: cc-by-sa-4.0
6
- license_name: CC-BY-SA 4.0
7
  size_categories:
8
  - 1-10k
9
  task_categories:
@@ -11,92 +11,41 @@ task_categories:
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
14
- source_datasets:
15
- - danish-foundation-models/danish-gigaword
16
  ---
17
-
18
- # Dataset Card for Johannes V. Jensen
19
-
20
- <!-- START-SHORT DESCRIPTION -->
21
- The works of the Danish author and poet, [Johannes V. Jensen](https://da.wikipedia.org/wiki/Johannes_V._Jensen).
22
- <!-- END-SHORT DESCRIPTION -->
23
-
24
-
25
-
26
-
27
-
28
-
29
  ## Dataset Description
30
-
31
-
32
- <!-- START-DESC-STATS -->
33
- - **Language**: dan, dansk, Danish
34
- - **Number of samples**: 42
35
- - **Number of tokens (Llama 3)**: 3.55M
36
- - **Average document length (characters)**: 254893.67
37
- <!-- END-DESC-STATS -->
38
-
39
-
40
-
41
- ## Dataset Structure
42
  An example from the dataset looks as follows.
43
-
44
-
45
- <!-- START-SAMPLE -->
46
- ```py
47
  {
48
- "text": "JØRGINE JØRGINE KØBENHAVN HAGE & CLAUSENS FORLAG (J. FR. CLAUSEN) 1926 JOHANNES V. JENSEN COPYRIGHT [...]",
49
- "source": "jvj",
50
- "id": "jvj_Jørgine",
51
- "added": "2020-06-26",
52
- "created": "1873-01-01, 1951-01-01",
53
- "license": "Attribution-ShareAlike 4.0 International",
54
- "domain": "Wiki & Books",
55
- "metadata": {
56
- "source-pretty": "Johannes V. Jensen (Danish poet)"
57
- }
58
  }
59
  ```
60
 
61
- ### Data Fields
62
-
63
- An entry in the dataset consists of the following fields:
64
-
65
- - `text`(`str`): The content of the document.
66
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
67
- - `id` (`str`): An unique identifier for each document.
68
- - `added` (`str`): An date for when the document was added to this collection.
69
- - `created` (`str`): An date range for when the document was originally created.
70
- - `license` (`str`): The license of the document. The licenses vary according to the source.
71
- - `domain` (`str`): The domain of the source
72
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
73
- - `metadata/*`: Potentially additional metadata
74
- <!-- END-SAMPLE -->
75
-
76
-
77
- ### Dataset Statistics
78
-
79
- <!-- START-DATASET PLOTS -->
80
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
81
- <img>
82
- <!-- END-DATASET PLOTS -->
83
-
84
-
85
- ## Additional Information
86
-
87
-
88
- ### Citation Information
89
-
90
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
91
-
92
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
93
-
94
- ```bibtex
95
- @inproceedings{dagw,
96
- title = {{The Danish Gigaword Corpus}},
97
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
98
- year = 2021,
99
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
100
- publisher = {NEALT}
101
- }
102
- ```
 
1
  ---
2
+ pretty_name: Johannes V. Jensen (Danish poet)
3
  language:
4
  - da
5
  license: cc-by-sa-4.0
6
+ license_name: Creative Commons Attribution Share Alike 4.0
7
  size_categories:
8
  - 1-10k
9
  task_categories:
 
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
 
 
14
  ---
15
+ # Dataset Card for Johannes V. Jensen (Danish poet)
 
 
 
 
 
 
 
 
 
 
 
16
  ## Dataset Description
17
+ - **Number of records:** 42
18
+ - **Languages:** Danish
19
+ ## Dataset Structure
 
 
 
 
 
 
 
 
 
20
  An example from the dataset looks as follows.
21
+ ```py
 
 
 
22
  {
23
+ 'text': 'JØRGINE JØRGINE KØBENHAVN HAGE & CLAUSENS FORLAG (',
24
+ 'source': 'jvj',
25
+ 'id': 'jvj_Jørgine',
26
+ 'added': '2020-06-26',
27
+ 'created': '1873-01-01, 1951-01-01',
28
+ 'metadata': {
29
+ 'domain': 'Wiki & Books',
30
+ 'license': 'Attribution-ShareAlike 4.0 International',
31
+ 'source-pretty': 'Johannes V. Jensen (Danish poet)'
32
+ }
33
  }
34
  ```
35
 
36
+ ## Data Fields
37
+
38
+ - **id**: source-specific identifier.
39
+ - **text**: textual content of the document.
40
+ - **source**: source of the data.
41
+ - **added**: timestamp when AI2 acquired this data.
42
+ - **created**: timestamp when the original document was created (best guess if not available).
43
+ - **metadata**: source-specific metadata.
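The `created` field packs a date range into one comma-separated string, while `added` is a single date. A minimal sketch in plain Python (record values abbreviated from the sample above) showing how the range can be parsed:

```python
from datetime import date

# Abbreviated record, following the sample above.
record = {
    "id": "jvj_Jørgine",
    "source": "jvj",
    "added": "2020-06-26",
    "created": "1873-01-01, 1951-01-01",
    "metadata": {"source-pretty": "Johannes V. Jensen (Danish poet)"},
}

# `created` holds the earliest and latest plausible creation dates.
start, end = (date.fromisoformat(part.strip())
              for part in record["created"].split(","))
print(start.year, end.year)  # 1873 1951
```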
44
+
45
+ ## License Information
46
+ <details>
47
+ <summary>Creative Commons Attribution Share Alike 4.0</summary>
48
+ <p>
49
+ Attribution-ShareAlike 4.0 International
50
+ </p>
51
+ </details>
 
 
 
 
 
data/lexdk/create.py DELETED
@@ -1,78 +0,0 @@
1
- """download lexdk from alexandrainst/lexdk-open"""
2
-
3
- from datetime import datetime
4
- from pathlib import Path
5
- from typing import cast
6
-
7
- import pandas as pd
8
- from datasets import Dataset, load_dataset
9
-
10
- column_order = [
11
- "text",
12
- "source",
13
- "id",
14
- "added",
15
- "created",
16
- "license",
17
- "domain",
18
- "metadata",
19
- ]
20
-
21
-
22
- def convert_sample(example: dict) -> dict:
23
- # from sample:
24
- # {
25
- # "url": "https://denstoredanske.lex.dk/Kullmanns_M%C3%B8lle",
26
- # "title": "Kullmanns Mølle",
27
- # "clarification": "",
28
- # "authors": ["https://brugere.lex.dk/6929"],
29
- # "date": "2021-01-20T13:23:20+01:00",
30
- # "license": "fri anvendelse",
31
- # "text": "Kullmanns Mølle er en mølle i Gudhjem, opkaldt efter Matts Kullmann, der byggede møllen i 1893 til sin søn, Christian Kullmann, se Gudhjem Mølle.",
32
- # }
33
- date = datetime.fromisoformat(example["date"])
34
- text = f"{example['title']}\n\npubliceret: {date}\n{example['text']}"
35
-
36
- new_example = dict(
37
- text_new=text,
38
- id=example["url"],
39
- source="lexdk",
40
- domain="Conversation",
41
- license="cc-by-sa-4.0",
42
- added="2025-01-04",
43
- created=f"{date.date()}, {date.date()}",
44
- metadata={"source-pretty": "Lex.dk"},
45
- )
46
-
47
- return new_example
48
-
49
-
50
- def main():
51
- ds = load_dataset("alexandrainst/lexdk-open", split="train")
52
- ds = cast(Dataset, ds)
53
-
54
- dates = [datetime.fromisoformat(date).date() for date in ds["date"]]
55
- print(str(min(dates)), ",", str(max(dates))) # 2009-01-28, 2023-09-05
56
-
57
- assert len(set(ds["url"])) == len(ds)
58
-
59
- ds = ds.map(convert_sample, num_proc=4)
60
- ds = ds.select_columns(column_order[1:] + ["text_new"])
61
- ds = ds.rename_columns({"text_new": "text"})
62
- # ensure order
63
- ds = ds.select_columns(column_order)
64
-
65
- df = ds.to_pandas()
66
- df = cast(pd.DataFrame, df)
67
- dedup_df = df.drop_duplicates(keep="first", subset=["text"])
68
- print("N. duplicates: ", df.shape[0] - dedup_df.shape[0]) # 0
69
-
70
- ds = ds.select(dedup_df.index)
71
- assert len(set(ds["text"])) == len(ds)
72
-
73
- save_path = Path(__file__).parent / "lexdk.parquet"
74
- ds.to_parquet(save_path)
75
-
76
-
77
- if __name__ == "__main__":
78
- main()
 
 
 
 
 
 
data/lexdk/descriptive_stats.json DELETED
@@ -1 +0,0 @@
1
- {"number_of_samples": 11887, "average_document_length": 1405.6435601918063, "number_of_tokens": 5688613, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
 
data/lexdk/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 9aead97c97d52f9b4b9fced8eea7827d764a6a91f2af23ddc4e90607d23c0076
  • Pointer size: 131 Bytes
  • Size of remote file: 552 kB
data/lexdk/lexdk.md DELETED
@@ -1,85 +0,0 @@
1
- ---
2
- pretty_name: OpenSubtitles
3
- language:
4
- - da
5
- license: cc-by-sa-4.0
6
- license_name: CC-BY-SA 4.0
7
- task_categories:
8
- - text-generation
9
- - fill-mask
10
- task_ids:
11
- - language-modeling
12
- source_datasets:
13
- - alexandrainst/lexdk-open
14
- ---
15
-
16
- # Dataset Card for OpenSubtitles
17
-
18
- <!-- START-SHORT DESCRIPTION -->
19
- Permissible use articles from [lex.dk](https://lex.dk).
20
- <!-- END-SHORT DESCRIPTION -->
21
-
22
- Lex.dk is a Danish online encyclopedia platform providing access to reliable and authoritative knowledge on a wide range of topics. It is created and curated by experts, ensuring high-quality, accurate content. The platform serves as a central hub for general and specialized information in Danish, making it a valuable resource for education, research, and general learning.
23
-
24
-
25
-
26
-
27
- ## Dataset Description
28
-
29
- <!-- START-DESC-STATS -->
30
- - **Language**: dan, dansk, Danish
31
- - **Number of samples**: 11.89K
32
- - **Number of tokens (Llama 3)**: 5.69M
33
- - **Average document length (characters)**: 1405.64
34
- <!-- END-DESC-STATS -->
35
-
36
-
37
- ## Dataset Structure
38
- An example from the dataset looks as follows.
39
-
40
- <!-- START-SAMPLE -->
41
- ```py
42
- {
43
- "text": "Oluf Høst Museet\n\npubliceret: 2014-04-23 03:42:33+02:00\nOluf Høst Museet, kunstmuseum i Gudhjem, Bor[...]",
44
- "source": "lexdk",
45
- "id": "https://denstoredanske.lex.dk/Oluf_H%C3%B8st_Museet",
46
- "added": "2025-01-04",
47
- "created": "2014-04-23, 2014-04-23",
48
- "license": "cc-by-sa-4.0",
49
- "domain": "Conversation",
50
- "metadata": {
51
- "source-pretty": "Lex.dk"
52
- }
53
- }
54
- ```
55
-
56
- ### Data Fields
57
-
58
- An entry in the dataset consists of the following fields:
59
-
60
- - `text` (`str`): The content of the document.
61
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
62
- - `id` (`str`): A unique identifier for each document.
63
- - `added` (`str`): A date for when the document was added to this collection.
64
- - `created` (`str`): A date range for when the document was originally created.
65
- - `license` (`str`): The license of the document. The licenses vary according to the source.
66
- - `domain` (`str`): The domain of the source
67
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
68
- - `metadata/*`: Potentially additional metadata
69
- <!-- END-SAMPLE -->
70
-
71
-
72
- ### Dataset Statistics
73
-
74
- <!-- START-DATASET PLOTS -->
75
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
76
- <img>
77
- <!-- END-DATASET PLOTS -->
78
-
79
-
80
- ## Additional Information
81
-
82
-
83
- ### Citation Information
84
-
85
- This dataset is derived from the publicly available dataset [alexandrainst/lexdk-open](https://huggingface.co/datasets/alexandrainst/lexdk-open).
 
 
 
 
 
 
 
data/lexdk/lexdk.parquet DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:5c4779881f575d6f612c8603ed4896f10ebc7293c59637fa8a0773ee4545fce3
3
- size 10007743
 
 
 
 
data/naat/descriptive_stats.json DELETED
@@ -1 +0,0 @@
1
- {"number_of_samples": 129, "average_document_length": 6832.387596899225, "number_of_tokens": 286677, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
 
data/naat/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: e4f14416631cbf0b8a6fe2dc260e6be69155313af1f93c94bd435a60413e4836
  • Pointer size: 131 Bytes
  • Size of remote file: 537 kB
data/naat/naat.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: NAAT
3
  language:
4
  - da
5
  license: cc0-1.0
6
- license_name: CC-0
7
  size_categories:
8
  - 1-10k
9
  task_categories:
@@ -11,87 +11,45 @@ task_categories:
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
14
- source_datasets:
15
- - danish-foundation-models/danish-gigaword
16
  ---
17
-
18
  # Dataset Card for NAAT
19
-
20
- <!-- START-SHORT DESCRIPTION -->
21
- Danish speeches from 1930-2022.
22
- <!-- END-SHORT DESCRIPTION -->
23
-
24
-
25
  ## Dataset Description
26
-
27
-
28
- <!-- START-DESC-STATS -->
29
- - **Language**: dan, dansk, Danish
30
- - **Number of samples**: 129
31
- - **Number of tokens (Llama 3)**: 286.68K
32
- - **Average document length (characters)**: 6832.39
33
- <!-- END-DESC-STATS -->
34
-
35
-
36
-
37
- ## Dataset Structure
38
  An example from the dataset looks as follows.
39
-
40
-
41
- <!-- START-SAMPLE -->
42
- ```py
43
  {
44
- "text": "Naar jeg i aften sender min nytaarshilsen til det danske folk og tænker tilbage paa det aar, der sva[...]",
45
- "source": "naat",
46
- "id": "naat_1958kongfrederikix",
47
- "added": "2020-02-11",
48
- "created": "1930-01-01, 2022-01-01",
49
- "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
50
- "domain": "Conversation",
51
- "metadata": {
52
- "source-pretty": "NAAT"
53
- }
 
 
54
  }
55
  ```
56
 
57
- ### Data Fields
58
 
59
- An entry in the dataset consists of the following fields:
 
 
 
 
 
60
 
61
- - `text` (`str`): The content of the document.
62
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
63
- - `id` (`str`): A unique identifier for each document.
64
- - `added` (`str`): A date for when the document was added to this collection.
65
- - `created` (`str`): A date range for when the document was originally created.
66
- - `license` (`str`): The license of the document. The licenses vary according to the source.
67
- - `domain` (`str`): The domain of the source
68
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
69
- - `metadata/*`: Potentially additional metadata
70
- <!-- END-SAMPLE -->
71
 
72
- ### Dataset Statistics
73
-
74
- <!-- START-DATASET PLOTS -->
75
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
76
- <img>
77
- <!-- END-DATASET PLOTS -->
78
-
79
-
80
- ## Additional Information
81
-
82
-
83
- ### Citation Information
84
-
85
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
86
-
87
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
88
-
89
- ```bibtex
90
- @inproceedings{dagw,
91
- title = {{The Danish Gigaword Corpus}},
92
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
93
- year = 2021,
94
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
95
- publisher = {NEALT}
96
- }
97
- ```
 
3
  language:
4
  - da
5
  license: cc0-1.0
6
+ license_name: Creative Commons Zero v1.0 Universal
7
  size_categories:
8
  - 1-10k
9
  task_categories:
 
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
 
 
14
  ---
 
15
  # Dataset Card for NAAT
 
 
 
 
 
 
16
  ## Dataset Description
17
+ - **Number of records:** 129
18
+ - **Languages:** Danish
19
+ ## Dataset Structure
 
 
 
 
 
 
 
 
 
20
  An example from the dataset looks as follows.
21
+ ```py
 
 
 
22
  {
23
+ 'text': 'Naar jeg i aften sender min nytaarshilsen til det ',
24
+ 'source': 'naat',
25
+ 'id': 'naat_1958kongfrederikix',
26
+ 'added': '2020-02-11',
27
+ 'created': '1930-01-01, 2022-01-01',
28
+ 'metadata': {
29
+ 'domain': 'Conversation',
30
+ 'license': 'Creative Commons Legal Code
31
+
32
+ CC0 1.0 Universal',
33
+ 'source-pretty': 'NAAT'
34
+ }
35
  }
36
  ```
37
 
38
+ ## Data Fields
39
 
40
+ - **id**: source-specific identifier.
41
+ - **text**: textual content of the document.
42
+ - **source**: source of the data.
43
+ - **added**: timestamp when AI2 acquired this data.
44
+ - **created**: timestamp when the original document was created (best guess if not available).
45
+ - **metadata**: source-specific metadata.
46
 
47
+ ## License Information
48
+ <details>
49
+ <summary>Creative Commons Zero v1.0 Universal</summary>
50
+ <p>
51
+ Creative Commons Legal Code
 
 
 
 
 
52
 
53
+ CC0 1.0 Universal
54
+ </p>
55
+ </details>
 
 
 
 
 
 
data/nordjyllandnews/descriptive_stats.json DELETED
@@ -1 +0,0 @@
1
- {"number_of_samples": 75219, "average_document_length": 1540.2673659580691, "number_of_tokens": 37905944, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
 
data/nordjyllandnews/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 96ed628dc507036a6b09c82a04b01fee2f79c78ece535f4890cb30db731525fb
  • Pointer size: 131 Bytes
  • Size of remote file: 560 kB
data/nordjyllandnews/nordjyllandnews.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: Nordjylland News
3
  language:
4
  - da
5
  license: cc0-1.0
6
- license_name: CC-0
7
  size_categories:
8
  - 10-100k
9
  task_categories:
@@ -11,77 +11,15 @@ task_categories:
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
14
- source_datasets:
15
- - alexandrainst/nordjylland-news-summarization
16
  ---
17
 
18
  # Dataset Card for Nordjylland News
19
 
20
- <!-- START-SHORT DESCRIPTION -->
21
- Articles from the Danish Newspaper [TV2 Nord](https://www.tv2nord.dk).
22
- <!-- END-SHORT DESCRIPTION -->
23
-
24
-
25
- The data is derived from the Huggingface dataset [alexandrainst/nordjylland-news-summarization](https://huggingface.co/datasets/alexandrainst/nordjylland-news-summarization) originally intended for text summarization.
26
 
27
  ## Dataset Description
28
-
29
-
30
- <!-- START-DESC-STATS -->
31
- - **Language**: dan, dansk, Danish
32
- - **Number of samples**: 75.22K
33
- - **Number of tokens (Llama 3)**: 37.91M
34
- - **Average document length (characters)**: 1540.27
35
- <!-- END-DESC-STATS -->
36
-
37
-
38
- ## Dataset Structure
39
- An example from the dataset looks as follows.
40
-
41
-
42
- <!-- START-SAMPLE -->
43
- ```py
44
- {
45
- "text": "Lav et referat af nedenstående tekst:\n\nTekst:\nOpdatering: Manden er nu fundet af Nordjyllands Politi[...]",
46
- "source": "nordjyllandnews",
47
- "id": "nordjyllandnews_0",
48
- "added": "2024-12-16",
49
- "created": "2000-01-01, 2024-01-01",
50
- "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
51
- "domain": "News",
52
- "metadata": {
53
- "source-pretty": "Nordjylland News"
54
- }
55
- }
56
- ```
57
-
58
- ### Data Fields
59
-
60
- An entry in the dataset consists of the following fields:
61
-
62
- - `text` (`str`): The content of the document.
63
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
64
- - `id` (`str`): A unique identifier for each document.
65
- - `added` (`str`): A date for when the document was added to this collection.
66
- - `created` (`str`): A date range for when the document was originally created.
67
- - `license` (`str`): The license of the document. The licenses vary according to the source.
68
- - `domain` (`str`): The domain of the source
69
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
70
- - `metadata/*`: Potentially additional metadata
71
- <!-- END-SAMPLE -->
72
-
73
-
74
- ### Dataset Statistics
75
-
76
- <!-- START-DATASET PLOTS -->
77
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
78
- <img>
79
- <!-- END-DATASET PLOTS -->
80
-
81
-
82
-
83
- ## Additional Information
84
-
85
 
86
  ## Opportunities for Improvement
87
 
@@ -89,7 +27,3 @@ An updated version of this data could be fetched from their [API](https://de
89
 
90
  # Sourced data
91
  This dataset is derived from [`alexandrainst/nordjylland-news-summarization`](https://huggingface.co/datasets/alexandrainst/nordjylland-news-summarization)
92
-
93
- ### Citation Information
94
-
95
- No citation is applicable for this work. We recommend citing the huggingface repository.
 
3
  language:
4
  - da
5
  license: cc0-1.0
6
+ license_name: Creative Commons Zero v1.0 Universal
7
  size_categories:
8
  - 10-100k
9
  task_categories:
 
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
 
 
14
  ---
15
 
16
  # Dataset Card for Nordjylland News
17
 
18
+ source: https://huggingface.co/datasets/alexandrainst/nordjylland-news-summarization
 
 
 
 
 
19
 
20
  ## Dataset Description
21
+ - **Number of records:** 75200
22
+ - **Languages:** Danish
 
 
 
 
 
23
 
24
  ## Opportunities for Improvement
25
 
 
27
 
28
  # Sourced data
29
  This dataset is derived from [`alexandrainst/nordjylland-news-summarization`](https://huggingface.co/datasets/alexandrainst/nordjylland-news-summarization)
 
 
 
 
data/opensubtitles/create.py DELETED
@@ -1,123 +0,0 @@
1
- from pathlib import Path
2
- from typing import cast
3
-
4
- import pandas as pd
5
- import spacy
6
- from datasets import Dataset, load_dataset
7
-
8
- # KCE: mail from Leon
9
- sample_to_redact = {
10
- # Der kommer en dag
11
- "opensub_6726481",
12
- "opensub_6732371",
13
- # Kollektivet
14
- "opensub_6645818",
15
- # Flaskepost fra P
16
- "opensub_6666922",
17
- "opensub_6720216",
18
- "opensub_6958711",
19
- # Fasandræberne
20
- "opensub_6036947",
21
- "opensub_6008622",
22
- # En du elsker
23
- "opensub_5828376",
24
- "opensub_5828378",
25
- # En chance til
26
- "opensub_6177523",
27
- # Lev stærkt
28
- "opensub_6467655",
29
- # Nymphomaniac
30
- "opensub_5604391",
31
- "opensub_5748340",
32
- "opensub_5748494",
33
- "opensub_5629516",
34
- # Kvinden i buret
35
- "opensub_5636248",
36
- "opensub_5514603",
37
- "opensub_5504932",
38
- # Den skaldede frisør
39
- "opensub_5084880",
40
- "opensub_5031826",
41
- # Jagten
42
- "opensub_6929419",
43
- "opensub_4885548",
44
- # Melancholia
45
- "opensub_4421330",
46
- "opensub_4406991",
47
- "opensub_4418817",
48
- # Ambassadøren
49
- "opensub_4557721",
50
- # Antichrist
51
- "opensub_5511502",
52
- "opensub_3938655",
53
- "opensub_3636940",
54
- "opensub_3564521",
55
- "opensub_3562215",
56
- # En kongelig affære
57
- "opensub_4725493",
58
- "opensub_4725160",
59
- "opensub_4725159",
60
- "opensub_4916871",
61
- "opensub_5186746",
62
- # Brødre
63
- "opensub_233943",
64
- "opensub_87475",
65
- }
66
-
67
- column_order = [
68
- "text",
69
- "source",
70
- "id",
71
- "added",
72
- "created",
73
- "license",
74
- "domain",
75
- "metadata",
76
- ]
77
-
78
-
79
- def convert_sample(example: dict) -> dict:
80
- text = example["text"]
81
- if example["doc_id"] in sample_to_redact:
82
- nlp = spacy.blank("da")
83
- doc = nlp(text)
84
- text = doc[:200].text # first 200 words
85
-
86
- new_example = dict(
87
- text_new=text,
88
- id=example["doc_id"],
89
- source="opensubtitles",
90
- domain="Conversation",
91
- license="Creative Commons Legal Code\n\nCC0 1.0 Universal",
92
- added="2025-01-02",
93
- created="1920-01-01, 2018-01-01", # assuming v2018
94
- metadata={"source-pretty": "OpenSubtitles"},
95
- )
96
-
97
- return new_example
98
-
99
-
100
- def main():
101
- ds = load_dataset("DDSC/partial-danish-gigaword-no-twitter", split="train")
102
- ds = cast(Dataset, ds)
103
- ds = ds.filter(lambda x: x["source"] == "opensub", num_proc=4)
104
- ds = ds.map(convert_sample, num_proc=4)
105
- ds = ds.select_columns(column_order[1:] + ["text_new"])
106
- ds = ds.rename_columns({"text_new": "text"})
107
- # ensure order
108
- ds = ds.select_columns(column_order)
109
-
110
- df = ds.to_pandas()
111
- df = cast(pd.DataFrame, df)
112
- dedup_df = df.drop_duplicates(keep="first", subset=["text"])
113
- print("N. duplicates: ", df.shape[0] - dedup_df.shape[0]) # 2422
114
-
115
- ds = ds.select(dedup_df.index)
116
- assert len(set(ds["text"])) == len(ds)
117
-
118
- save_path = Path(__file__).parent / "opensubtitles.parquet"
119
- ds.to_parquet(save_path)
120
-
121
-
122
- if __name__ == "__main__":
123
- main()
 
 
 
 
 
 
data/opensubtitles/descriptive_stats.json DELETED
@@ -1 +0,0 @@
1
- {"number_of_samples": 29820, "average_document_length": 26298.017572099263, "number_of_tokens": 271599443, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
 
data/opensubtitles/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: cc0439ba8c58215d1cf1dcfa3dab4dd28c9f4d00065a44bba25757ee605f6425
  • Pointer size: 131 Bytes
  • Size of remote file: 550 kB
data/opensubtitles/opensubtitles.md DELETED
@@ -1,159 +0,0 @@
1
- ---
2
- pretty_name: OpenSubtitles
3
- language:
4
- - da
5
- license: cc0-1.0
6
- license_name: CC-0
7
- task_categories:
8
- - text-generation
9
- - fill-mask
10
- task_ids:
11
- - language-modeling
12
- source_datasets:
13
- - DDSC/partial-danish-gigaword-no-twitter
14
- ---
15
-
16
- # Dataset Card for OpenSubtitles
17
-
18
- <!-- START-SHORT DESCRIPTION -->
19
- Danish subsection of [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles).
20
- <!-- END-SHORT DESCRIPTION -->
21
-
22
-
23
- ## Dataset Description
24
-
25
- <!-- START-DESC-STATS -->
26
- - **Language**: dan, dansk, Danish
27
- - **Number of samples**: 29.82K
28
- - **Number of tokens (Llama 3)**: 271.60M
29
- - **Average document length (characters)**: 26298.02
30
- <!-- END-DESC-STATS -->
31
-
32
-
33
- ## Dataset Structure
34
- An example from the dataset looks as follows.
35
-
36
- <!-- START-SAMPLE -->
37
- ```py
38
- {
39
- "text": "Tidligere i vikingerne...\nJeg skal gå tilbage til England.\nBurde være gået tilbage for lang tid side[...]",
40
- "source": "opensubtitles",
41
- "id": "opensub_6822913",
42
- "added": "2025-01-02",
43
- "created": "1920-01-01, 2018-01-01",
44
- "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
45
- "domain": "Conversation",
46
- "metadata": {
47
- "source-pretty": "OpenSubtitles"
48
- }
49
- }
50
- ```
51
-
52
- ### Data Fields
53
-
54
- An entry in the dataset consists of the following fields:
55
-
56
- - `text` (`str`): The content of the document.
57
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
58
- - `id` (`str`): A unique identifier for each document.
59
- - `added` (`str`): A date for when the document was added to this collection.
60
- - `created` (`str`): A date range for when the document was originally created.
61
- - `license` (`str`): The license of the document. The licenses vary according to the source.
62
- - `domain` (`str`): The domain of the source
63
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
64
- - `metadata/*`: Potentially additional metadata
65
- <!-- END-SAMPLE -->
66
-
67
-
68
- ### Additional Processing
69
-
70
- Due to copyright concerns, additional documents have been removed. These include:
71
-
72
- ```py
73
- {
74
- # Der kommer en dag
75
- "opensub_6726481",
76
- "opensub_6732371",
77
- # Kollektivet
78
- "opensub_6645818",
79
- # Flaskepost fra P
80
- "opensub_6666922",
81
- "opensub_6720216",
82
- "opensub_6958711",
83
- # Fasandræberne
84
- "opensub_6036947",
85
- "opensub_6008622",
86
- # En du elsker
87
- "opensub_5828376",
88
- "opensub_5828378",
89
- # En chance til
90
- "opensub_6177523",
91
- # Lev stærkt
92
- "opensub_6467655",
93
- # Nymphomaniac
94
- "opensub_5604391",
95
- "opensub_5748340",
96
- "opensub_5748494",
97
- "opensub_5629516",
98
- # Kvinden i buret
99
- "opensub_5636248",
100
- "opensub_5514603",
101
- "opensub_5504932",
102
- # Den skaldede frisør
103
- "opensub_5084880",
104
- "opensub_5031826",
105
- # Jagten
106
- "opensub_6929419",
107
- "opensub_4885548",
108
- # Melancholia
109
- "opensub_4421330",
110
- "opensub_4406991",
111
- "opensub_4418817",
112
- # Ambassadøren
113
- "opensub_4557721",
114
- # Antichrist
115
- "opensub_5511502",
116
- "opensub_3938655",
117
- "opensub_3636940",
118
- "opensub_3564521",
119
- "opensub_3562215",
120
- # En kongelig affære
121
- "opensub_4725493",
122
- "opensub_4725160",
123
- "opensub_4725159",
124
- "opensub_4916871",
125
- "opensub_5186746",
126
- # Brødre
127
- "opensub_233943",
128
- "opensub_87475",
129
- }
130
- ```
131
-
132
- We have additionally removed duplicate entries from the original dataset.
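The deduplication mirrors the step in the removed `create.py`: duplicates of `text` are dropped, keeping the first occurrence, and the surviving pandas row indices are used to subset the Hugging Face `Dataset`. A minimal sketch with toy data (assuming pandas is installed):

```python
import pandas as pd

# Toy stand-in for the dataset's "text" column.
df = pd.DataFrame({"text": ["a", "b", "a"], "id": [0, 1, 2]})

# Keep the first occurrence of each text; the original row indices survive,
# so they can be passed to Dataset.select(...) as in create.py.
dedup_df = df.drop_duplicates(keep="first", subset=["text"])
print(list(dedup_df.index))  # [0, 1]
```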
133
-
134
- ### Dataset Statistics
135
-
136
- <!-- START-DATASET PLOTS -->
137
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
138
- <img>
139
- <!-- END-DATASET PLOTS -->
140
-
141
-
142
- ## Additional Information
143
-
144
-
145
- ### Citation Information
146
-
147
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
148
-
149
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
150
-
151
- ```bibtex
152
- @inproceedings{dagw,
153
- title = {{The Danish Gigaword Corpus}},
154
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
155
- year = 2021,
156
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
157
- publisher = {NEALT}
158
- }
159
- ```
 
 
 
 
 
 
data/opensubtitles/opensubtitles.parquet DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:1c80228f2095281e8e1ce2339a071873299dee2912f83706bf271ea782a94b39
3
- size 496269823
 
 
 
 
data/relig/descriptive_stats.json DELETED
@@ -1 +0,0 @@
1
- {"number_of_samples": 66, "average_document_length": 53873.56060606061, "number_of_tokens": 1243970, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
 
data/relig/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 6d49a67bfbc7b886985a767045b17b229bca49c3024af37e693bffd711aa45cc
  • Pointer size: 131 Bytes
  • Size of remote file: 531 kB
data/relig/relig.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: Religious texts
3
  language:
4
  - da
5
  license: cc0-1.0
6
- license_name: CC-0
7
  size_categories:
8
  - 1-10k
9
  task_categories:
@@ -11,87 +11,46 @@ task_categories:
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
14
- source_datasets:
15
- - danish-foundation-models/danish-gigaword
16
  ---
17
-
18
  # Dataset Card for Religious texts
19
-
20
- <!-- START-SHORT DESCRIPTION -->
21
- Danish religious texts from 1700-2022.
22
- <!-- END-SHORT DESCRIPTION -->
23
-
24
-
25
  ## Dataset Description
26
-
27
-
28
- <!-- START-DESC-STATS -->
29
- - **Language**: dan, dansk, Danish
30
- - **Number of samples**: 66
31
- - **Number of tokens (Llama 3)**: 1.24M
32
- - **Average document length (characters)**: 53873.56
33
- <!-- END-DESC-STATS -->
34
-
35
-
36
-
37
- ## Dataset Structure
38
  An example from the dataset looks as follows.
39
-
40
-
41
- <!-- START-SAMPLE -->
42
- ```py
43
  {
44
- "text": "Salomos Højsang\nKys mig, giv mig Kys af din mund thi din Kærlighed er bedre end Vin.\nLifligt dufter [...]",
45
- "source": "relig",
46
- "id": "relig_SON",
47
- "added": "2020-09-14",
48
- "created": "1700-01-01, 2022-01-01",
49
- "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
50
- "domain": "Wiki & Books",
51
- "metadata": {
52
- "source-pretty": "Religious texts"
53
- }
 
 
 
54
  }
55
  ```
56
 
57
- ### Data Fields
58
 
59
- An entry in the dataset consists of the following fields:
 
 
 
 
 
60
 
61
- - `text`(`str`): The content of the document.
62
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
63
- - `id` (`str`): An unique identifier for each document.
64
- - `added` (`str`): An date for when the document was added to this collection.
65
- - `created` (`str`): An date range for when the document was originally created.
66
- - `license` (`str`): The license of the document. The licenses vary according to the source.
67
- - `domain` (`str`): The domain of the source
68
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
69
- - `metadata/*`: Potentially additional metadata
70
- <!-- END-SAMPLE -->
71
 
72
- ### Dataset Statistics
73
-
74
- <!-- START-DATASET PLOTS -->
75
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
76
- <img>
77
- <!-- END-DATASET PLOTS -->
78
-
79
-
80
- ## Additional Information
81
-
82
-
83
- ### Citation Information
84
-
85
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
86
-
87
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
88
-
89
- ```bash
90
- @inproceedings{dagw,
91
- title = {{The Danish Gigaword Corpus}},
92
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
93
- year = 2021,
94
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
95
- publisher = {NEALT}
96
- }
97
- ```
 
3
  language:
4
  - da
5
  license: cc0-1.0
6
+ license_name: Creative Commons Zero v1.0 Universal
7
  size_categories:
8
  - 1-10k
9
  task_categories:
 
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
 
 
14
  ---
 
15
  # Dataset Card for Religious texts
 
 
 
 
 
 
16
  ## Dataset Description
17
+ - **Number of records:** 66
18
+ - **Languages:** Danish
19
+ ## Dataset Structure
 
 
 
 
 
 
 
 
 
20
  An example from the dataset looks as follows.
21
+ ```py
 
 
 
22
  {
23
+ 'text': 'Salomos Højsang
24
+ Kys mig, giv mig Kys af din mund t',
25
+ 'source': 'relig',
26
+ 'id': 'relig_SON',
27
+ 'added': '2020-09-14',
28
+ 'created': '1700-01-01, 2022-01-01',
29
+ 'metadata': {
30
+ 'domain': 'Wiki & Books',
31
+ 'license': 'Creative Commons Legal Code
32
+
33
+ CC0 1.0 Universal',
34
+ 'source-pretty': 'Religious texts'
35
+ }
36
  }
37
  ```
38
 
39
+ ## Data Fields
40
 
41
+ - **id**: source-specific identifier.
42
+ - **text**: textual content of the document.
43
+ - **source**: source of the data.
44
+ - **added**: timestamp when AI2 acquired this data.
45
+ - **created**: timestamp when the original document was created (best guess if not available).
46
+ - **metadata**: source-specific metadata.
47
 
48
+ ## License Information
49
+ <details>
50
+ <summary>Creative Commons Zero v1.0 Universal</summary>
51
+ <p>
52
+ Creative Commons Legal Code
 
 
 
 
 
53
 
54
+ CC0 1.0 Universal
55
+ </p>
56
+ </details>