Datasets:
Tasks: Text Generation
Sub-tasks: language-modeling
Formats: parquet
Languages: Danish
Size: 1M - 10M
License:

minor updates to paper
- .gitignore +3 -0
- paper/paper.md +55 -27
- test_results.log +2 -27
- uv.lock +1 -1
.gitignore CHANGED
@@ -11,3 +11,6 @@ cspell.json
 # tmp files
 tmp.py
 tmp.png
+
+# MacOS
+.DS_Store
paper/paper.md CHANGED
@@ -1,39 +1,48 @@
-#
+# Dynaword: Moving from One-shot to Continously Developed Datasets
 
 Authors:
-
-This list of authors to be invited for co-authorship
-
-CHC
+<!-- Current authors -->
 - Kenneth Enevoldsen
+- Kristian Nørgaaard Jensen
 - Jan Kostkan
+
+- Peter Bjørn Jørgensen
 - Per
 - Kristoffer Nielbo
-- Marton
-- Martin (gode CI tanker)
 
-
-
-
-
-- Kristian
-- Torben
-
-DFM
-- Bolette Pedersen (eller nogen fra hendes gruppe)
-- Desmond
-- Peter
+<!--
+Potential co-authors to invite
+CHC:
+- Marton
+
+Alexandra
+KU
+SDU
+Leon
+Danish Royal Library
+that guys from the treebank project
+someone from DDSC
+someone from Huggingface
+-->
 
-
-
 
 # Abstract
 
-
-
-
-dataset is
+Large scale datasets are foundational for research and development in natural language processing and related fields and good datasets
+often require multiple iterations to improve and adjust.
+Despite this we see many releases of static datasets rather than intended continually expanding resourced, thus preventing community
+contributions and expansion. Even when a large-scale dataset see versioned releases the filtering and quality assurance is often only done by the
+team releasing the data.
+And while we have seen impressive large-scale released these are often derived from Common crawl or related sources which is likely to contain
+copyrighted data that does not support the stated license of the release. This restricts not only the use of the data, but also its derivates, such as
+annotated data and language models.
+In an attempt to remedy this shortcoming we developed Danish Dynaword. An illustrative example of how large-scale datasets can be developed. This dynawords contain more than 2x as many tokens as comparable releases, is restricted to strictly permissible licenses data and have seen multipl contributions across industry and research.
+This dataset comes equipped with CI to ensure data format, quality, and high documentation standards than can be run in a developer-friendly
+enviroments in under 10 minutes.
+Along with this release we have additionally started dynawords projects for Norwegian, Swedish, Faroese, Icelandic.
+<!-- Candidate abstract - will probably change -->
+
+dataset is available at: https://huggingface.co/datasets/danish-foundation-models/danish-dynaword
 
 # Introduction
 
@@ -50,6 +59,22 @@ While it is in theory possible to open a PR on existing dataset, this practice i
 
 Contrasting this approach to code development - where it is common practice to create PRs to continually improve the codebase - makes this dataset development landscape seems immature and inefficent.
 
+## What is a Dynaword
+
+A dynaword is a continously developed dataset resource intended a variety of downstream use-cases within natural language processing. Dynaword does intend to replace existing large scale releases such as fine-web [@fineweb], OSCAR [@OSCAR], or HLPT [@hplt], but rather
+complement these in situation where clearly licensed dataset might be preferred. Some of these cases for example include:
+
+- Clearly license datasets lends itself to better to derivative providing good starting points for permissibly annotated datasets.
+- EUs AI-act also poses requirement on the training data used for model training
+- The EUs AI act makes the distributor of a model responsible for copyright violations and thus companies might prefer models derived from clearly permissible data.
+<!-- probably other cases -->
+
+### Continuous Development of large Scale datasets
+
+Cont
+
+### Design Considerations
+
 ## Related work
 
 
@@ -71,9 +96,10 @@ Existing projects on open-licensed data [@elutherAI]
 
 We note that our approach is complementary to existing projects such as fineweb
 
-### Continuous Integration
 
-
+
+
+<!-- add stuff on data ops -->
 
 ### Danish and Scandinavian Datasets
 
@@ -129,8 +155,10 @@ This lack of clarity increased the likelihood of dataset attacks such as dataset
 
 - Machine generated content within training data: Not
 
+- Often we are interested in high-quality data when training an LLM. However the presented dynaword only performs a minimal level of cleaning. While
+this is a deliberate decision as certain model choices might warrant for different cleaning approaches. This could leave a substantial level of post-processing to the user of the dataset.
 
-Ethical and Environmental consideration
+Ethical and Environmental consideration
 
 enviromental:
 - common codebase lead to less duplication of dataset and reduces storage required
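The abstract added above describes CI that validates data format and quality and runs in under 10 minutes. As a rough sketch of what one such check can look like (this is not the repository's actual test code; the expected column names are an assumption for illustration), a pytest-style schema test against the published dataset might be:

```python
# Hypothetical sketch of a dataset-schema CI check. EXPECTED_COLUMNS is an
# assumption for illustration, not the repository's actual schema definition.
from datasets import load_dataset

EXPECTED_COLUMNS = {"id", "text", "source"}


def test_dataset_schema():
    # Stream a single row so the check stays fast enough for CI.
    ds = load_dataset(
        "danish-foundation-models/danish-dynaword",
        split="train",
        streaming=True,
    )
    row = next(iter(ds))
    assert EXPECTED_COLUMNS <= set(row.keys())
    assert isinstance(row["text"], str) and row["text"].strip()
```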
test_results.log CHANGED
@@ -7,32 +7,7 @@ collected 96 items
 src/tests/test_dataset_schema.py ....................................... [ 40%]
 .............................. [ 71%]
 src/tests/test_duplicates.py ssssssssssssssssssssssss [ 96%]
-src/tests/test_load.py
+src/tests/test_load.py .. [ 98%]
 src/tests/test_unique_ids.py . [100%]
 
-
-__________________________ test_all_datasets_in_yaml ___________________________
-
-repo_path = PosixPath('/Users/au561649/Github/danish-dynaword')
-
-    def test_all_datasets_in_yaml(repo_path: Path):
-        frontmatter, _ = read_frontmatter_and_body(repo_path / "README.md")
-
-        ds_names = {
-            cfg["config_name"]
-            for cfg in frontmatter["configs"]
-            if cfg["config_name"] != "default"
-        }
-
-        data_folder = repo_path / "data"
-        datasets = data_folder.glob("*")
-
-        for dataset in datasets:
->           assert dataset.name in ds_names
-E           AssertionError: assert 'miljoeportalen' in {'adl', 'botxt', 'dannet', 'depbank', 'ep', 'ft', ...}
-E           +  where 'miljoeportalen' = PosixPath('/Users/au561649/Github/danish-dynaword/data/miljoeportalen').name
-
-src/tests/test_load.py:29: AssertionError
-=========================== short test summary info ============================
-FAILED src/tests/test_load.py::test_all_datasets_in_yaml - AssertionError: as...
-================== 1 failed, 71 passed, 24 skipped in 14.54s ===================
+======================= 72 passed, 24 skipped in 10.63s ========================
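The failure removed above came from a consistency check: every folder under data/ must be registered as a config in the README's YAML frontmatter, and 'miljoeportalen' was not; in the updated log the test passes. For readers unfamiliar with the helper it calls, a minimal sketch of how read_frontmatter_and_body could be implemented is shown below (the repository's actual implementation may differ; this assumes the README keeps its metadata between two '---' fences, as Hugging Face dataset cards do):

```python
# Minimal sketch of the read_frontmatter_and_body helper used in the test above.
# Assumes standard '---'-fenced YAML frontmatter; the real helper may differ.
from pathlib import Path

import yaml


def read_frontmatter_and_body(readme: Path) -> tuple[dict, str]:
    text = readme.read_text(encoding="utf-8")
    # Split into: text before the first fence, the YAML block, and the body.
    _, frontmatter, body = text.split("---", 2)
    return yaml.safe_load(frontmatter), body
```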
uv.lock CHANGED
@@ -275,7 +275,7 @@ wheels = [
 
 [[package]]
 name = "danish-dynaword"
-version = "1.0.
+version = "1.0.7"
 source = { virtual = "." }
 dependencies = [
     { name = "datasets" },