Datasets:
Tasks: Text Generation
Modalities: Text
Formats: parquet
Sub-tasks: language-modeling
Languages: Danish
Size: 100K - 1M
License:
added #10 by KennethEnevoldsen - opened
- .gitignore +0 -6
- CONTRIBUTING.md +0 -56
- README.md +138 -100
- data/adl/adl.parquet +2 -2
- data/botxt/botxt.parquet +2 -2
- data/dannet/dannet.parquet +2 -2
- data/depbank/depbank.parquet +2 -2
- data/ep/ep.parquet +2 -2
- data/ft/ft.parquet +2 -2
- data/gutenberg/gutenberg.parquet +2 -2
- data/hest/hest.parquet +2 -2
- data/jvj/jvj.parquet +2 -2
- data/naat/naat.parquet +2 -2
- data/relig/relig.parquet +2 -2
- data/retsinformationdk/retsinformationdk.parquet +2 -2
- data/retspraksis/retspraksis.parquet +2 -2
- data/skat/skat.parquet +2 -2
- data/spont/spont.parquet +2 -2
- data/synne/synne.parquet +2 -2
- data/tv2r/tv2r.parquet +2 -2
- data/wiki/wiki.parquet +2 -2
- data/wikibooks/wikibooks.parquet +2 -2
- data/wikisource/wikisource.parquet +2 -2
- paper/paper.md +0 -136
- paper/references.bib +0 -25
- pyproject.toml +4 -4
- scripts/bump_version.py +4 -3
- scripts/load_dataset.py +6 -0
- uv.lock +8 -8
.gitignore
CHANGED
@@ -1,9 +1,3 @@
 # Python
 __pycache__/*
 *.pyc
-
-# cSpell
-cspell.json
-
-# tmp files
-tmp.py
CONTRIBUTING.md
DELETED
@@ -1,56 +0,0 @@
-## Working with the dataset locally
-
-A Huggingface datasets repository is a git repository like any other. You can simply download it like so:
-
-```bash
-git clone https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2
-cd danish-gigaword-2
-```
-
-You can then work with the dataset locally like so:
-
-```py
-from datasets import load_dataset
-
-name = "../." # instead of "danish-foundation-models/danish-gigaword-2"
-dataset = load_dataset("../.", split="train")
-# make transformations here
-```
-
-> Note: While the dataset is local, Huggingface still uses a cache, so you might need to reset it after changes have been made to see that everything works correctly. You can do this by deleting the cached files, which you can locate using `dataset.cache_files`.
-
-## Installing dependencies
-
-This repo comes with a few dependencies you need to install to make it run. It uses a [makefile](https://opensource.com/article/18/8/what-how-makefile) to run commands and [uv](https://docs.astral.sh/uv/) for package management. Once you have uv installed you can install the dependencies using:
-
-```bash
-make install
-```
-
-## Running dataset tests
-
-This dataset is special in that it comes with a test suite, e.g. testing that the ids are unique and that the format is consistent. You can run the suite using:
-
-```bash
-make test
-```
-
-## Submitting a PR
-
-Creating a PR on Huggingface is a bit different from creating one on GitHub.
-
-1) Go to the community tab on Huggingface, press *new pull request*, and choose *on your machine*. Specify the title of your PR. Then you can simply:
-
-```bash
-git fetch origin refs/pr/{PR NUMBER}:pr/{PR NUMBER}
-git checkout pr/{PR NUMBER}
-# make your changes here
-# push to hub
-git push origin pr/{PR NUMBER}:refs/pr/{PR NUMBER}
-```
-
-Before you make the PR, be sure that the tests have been run.
-
-For an example PR, see the following:
-
-- [Restructuring columns in the dataset](https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2/discussions/11)
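The deleted guide above describes resetting the local Huggingface cache by deleting the files listed in `dataset.cache_files`. For reference, a minimal sketch of how that could look, assuming the repo is cloned locally and the script runs from a subfolder of the clone (the deletion loop is illustrative, not part of this PR):

```python
from pathlib import Path

from datasets import load_dataset

ds = load_dataset("../.", split="train")  # load the local clone, as in the deleted guide

# locate the Arrow cache files backing this dataset and delete them,
# so local edits are picked up on the next load_dataset call
for cache_file in ds.cache_files:
    Path(cache_file["filename"]).unlink(missing_ok=True)
```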
README.md
CHANGED
@@ -99,95 +99,59 @@ task_categories:
 - text-generation
 task_ids:
 - language-modeling
-pretty_name: Danish
+pretty_name: Danish Gigaword
 language_bcp47:
 - da
 - da-bornholm
 - da-synnejyl
 ---
 
-
-<!-- readme structure is inspired by:
-https://github.com/huggingface/datasets/blob/main/templates/README_guide.md -->
-
-# 🧨 Danish Dynaword
+# Danish Gigaword 2
 
-
-| ------------ | ----------------- |
-| **Language** | dan, dansk, Danish |
-| **License** | Permissible, see the respective dataset |
-| **Models** | For models trained on this data see [danish-foundation-models](https://huggingface.co/danish-foundation-models) |
-| **Contact** | If you have questions about this project please create an issue [here](https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2/discussions) |
+*Version*: 2.0.0
 
+*License*: See the respective dataset
 
-
 ## Table of Contents
-- [
+- [Danish Gigaword 2](#danish-gigaword-2)
 - [Table of Contents](#table-of-contents)
 - [Dataset Description](#dataset-description)
 - [Dataset Summary](#dataset-summary)
 - [Loading the dataset](#loading-the-dataset)
-- [Languages:](#languages)
 - [Dataset Structure](#dataset-structure)
 - [Data Instances](#data-instances)
 - [Data Fields](#data-fields)
 - [Data Splits](#data-splits)
 - [Dataset Creation](#dataset-creation)
-- [Curation Rationale](#curation-rationale)
-- [Annotations](#annotations)
 - [Source Data](#source-data)
 - [Additional Information](#additional-information)
-- [Contributing to the dataset](#contributing-to-the-dataset)
 - [Citation Information](#citation-information)
 
 ## Dataset Description
 
+This is intended as a second version of the Danish Gigaword corpus, to be continually updated with new data sources. It is currently a work in progress.
 
 ### Dataset Summary
 
-The Danish
-
+The Danish Gigaword Corpus contains text spanning several domains and forms.
 
 ### Loading the dataset
 
 ```py
 from datasets import load_dataset
 
-name = "danish-foundation-models/danish-
+name = "danish-foundation-models/danish-gigaword"
 ds = load_dataset(name, split = "train")
 sample = ds[1] # see "Data Instances" below
-```
 
-or load
-```py
+# or load by streaming the data
 ds = load_dataset(name, split = "train", streaming=True)
-
-sample = next(iter(dataset_iter))
+sample = next(iter(ds))
 ```
 
-You can also load a single subset at a time:
-```py
-ds = load_dataset(name, "adl", split = "train")
-```
-
-
-As Danish Dynaword is continually expanding and curated, you can make sure that you get the same dataset every time by specifying the revision:
-```py
-ds = load_dataset(name, revision="{desired revision}")
-```
-
-### Languages:
-This dataset includes the following languages:
-
-- dan-Latn
-- dan-Latn-bornholm
-- dan-Latn-synnejyl
-
-Language is denoted using [BCP-47](https://en.wikipedia.org/wiki/IETF_language_tag), using the language code ISO 639-3 and the script code ISO 15924. The last element denotes the region variant.
-
 ## Dataset Structure
 
-The dataset contains text from different sources which are thoroughly defined in [Source Data](#source-data).
+The dataset contains text from different sources which are thoroughly defined in [Source Data](#source-data). See the [homepage](https://gigaword.dk) or [paper](https://aclanthology.org/2021.nodalida-main.46.pdf) for more information.
 
 ### Data Instances
 
@@ -195,14 +159,15 @@ Each entry in the dataset consists of a single text with associated metadata
 
 ```py
 {
-
-
-
-
-
-
-
-
+ 'text': 'Vimoutiers er en kommune i departementet Orne i Basse-Normandie regionen i det nordvestlige Frankrig.\nCykelløbet Paris-Camembert slutter i Vimoutiers.\nHistorie.\nDen 14. juni 1944, under invasionen i Normandiet blev Vimoutiers bombarderet af allierede styrker. Landsbyen blev ødelagt og 220 civile dræbt.\nPersonligheder.\nPolitikeren Joseph Laniel (1889-1975) var født i Vomoutiers.',
+ 'source': 'wiki',
+ 'id': 'wiki_366127',
+ 'added': '2021-03-28',
+ 'created': '2019-01-01, 2021-01-01',
+ 'metadata': {
+   'domain': 'Wiki & Books',
+   'license': 'Creative Commons Legal Code\n\nCC0 1.0 Universal',
+   'source-pretty': 'Wikipedia'}
 }
 ```
 
@@ -212,13 +177,12 @@ An entry in the dataset consists of the following fields:
 
 - `text` (`str`): The content of the document.
 - `source` (`str`): The source of the document (see [Source Data](#source-data)).
-- `id` (`str`): An unique
+- `id` (`str`): A unique identifier for each document.
 - `added` (`str`): A date for when the document was added to this collection.
 - `created` (`str`): A date range for when the document was originally created.
-- `license` (`str`): The license of the document. The licenses vary according to the source.
-- `domain` (`str`): The domain of the source.
-- `metadata/source-pretty` (`str`): The
-- `metadata/*`: Potentially additional metadata
+- `metadata/license` (`str`): The license of the document. The licenses vary according to the source.
+- `metadata/domain` (`str`): The domain of the source.
+- `metadata/source-pretty` (`str`): The longform version of the short-form source name.
 
 
 ### Data Splits
@@ -227,54 +191,128 @@ The entire corpus is provided in the `train` split.
 
 ## Dataset Creation
 
-### Curation Rationale
-
-These datasets were collected and curated with the intention of making large quantities of Danish text data available. While the data was collected with developing language models in mind, it is likely to have multiple other uses, such as examining language development and differences across domains.
-
-### Annotations
-
-This data generally contains no annotation besides the metadata attached to each sample, such as what domain it belongs to.
-
 ### Source Data
 
 Below follows a brief overview of the sources in the corpus along with their individual license.
 
-| Source | License
-| ----------------- |
-| adl | Creative Commons Legal Code 1.0 Universal
-| botxt | Creative Commons Legal Code 1.0 Universal
-| dannet | [dannet license]
-| depbank | Attribution-ShareAlike 4.0 International
-| ep | Creative Commons Legal Code 1.0 Universal
-| ft | Creative Commons Legal Code 1.0 Universal
-| gutenberg | [gutenberg license]
-| hest | Creative Commons Legal Code 1.0 Universal
-| jvj | Attribution-ShareAlike 4.0 International
-| naat | Creative Commons Legal Code 1.0 Universal
-| relig | Creative Commons Legal Code 1.0 Universal
-| retsinformationdk |
-| retspraksis | Creative Commons Legal Code 1.0 Universal
-| skat | Creative Commons Legal Code 1.0 Universal
-| spont | Creative Commons Legal Code 1.0 Universal
-| synne | Creative Commons Legal Code 1.0 Universal
-| tv2r |
-| wiki | Creative Commons Legal Code 1.0 Universal
-| wikibooks | Creative Commons Legal Code 1.0 Universal
-| wikisource | Creative Commons Legal Code 1.0 Universal
-
+| Source            | License |
+| ----------------- | ------- |
+| adl               | Creative Commons Legal Code 1.0 Universal |
+| botxt             | Creative Commons Legal Code 1.0 Universal |
+| dannet            | [dannet license](https://cst.ku.dk/projekter/dannet/license.txt) |
+| depbank           | Attribution-ShareAlike 4.0 International |
+| ep                | Creative Commons Legal Code 1.0 Universal |
+| ft                | Creative Commons Legal Code 1.0 Universal |
+| gutenberg         | [gutenberg license](https://www.gutenberg.org/policy/license.html) |
+| hest              | Creative Commons Legal Code 1.0 Universal |
+| jvj               | Attribution-ShareAlike 4.0 International |
+| naat              | Creative Commons Legal Code 1.0 Universal |
+| relig             | Creative Commons Legal Code 1.0 Universal |
+| retsinformationdk | Danish Copyright law at https://www.retsinformation.dk/forms/r0710.aspx?id=164796 states "§ 9. Love, administrative forskrifter, retsafgørelser og lignende offentlige aktstykker er ikke genstand for ophavsret. Stk. 2. Bestemmelsen i stk. 1 gælder ikke for værker, der fremtræder som selvstændige bidrag i de i stk. 1 nævnte aktstykker. Sådanne værker må dog gengives i forbindelse med aktstykket. Retten til videre udnyttelse afhænger af de i øvrigt gældende regler." |
+| retspraksis       | Creative Commons Legal Code 1.0 Universal |
+| skat              | Creative Commons Legal Code 1.0 Universal |
+| spont             | Creative Commons Legal Code 1.0 Universal |
+| synne             | Creative Commons Legal Code 1.0 Universal |
+| tv2r              | The owner of this content is TV2 Regionerne, Denmark. Creative Commons Attribution 4.0 International |
+| wiki              | Creative Commons Legal Code 1.0 Universal |
+| wikibooks         | Creative Commons Legal Code 1.0 Universal |
+| wikisource        | Creative Commons Legal Code 1.0 Universal |
 
+These sources correspond to the following top-level domains in the dataset:
+```python
+# mapping from source to top-level domain
+domain_mapping_dict = {
+    "retsinformationdk": "Legal",
+    "skat": "Legal",
+    "retspraksis": "Legal",
+    "hest": "Social Media",
+    "cc": "Web",
+    "adl": "Wiki & Books",
+    "botxt": "Other",
+    "danavis": "News",
+    "dannet": "dannet",
+    "depbank": "Other",
+    "ep": "Conversation",
+    "ft": "Conversation",
+    "gutenberg": "Wiki & Books",
+    "jvj": "Wiki & Books",
+    "naat": "Conversation",
+    "opensub": "Conversation",
+    "relig": "Wiki & Books",
+    "spont": "Conversation",
+    "synne": "Other",
+    "tv2r": "News",
+    "wiki": "Wiki & Books",
+    "wikibooks": "Wiki & Books",
+    "wikisource": "Wiki & Books",
+    "twfv19": "Social Media",  # not present in this version of the dataset
+}
+```
 
+And the following mapping translates between the short form and the long form of the source name:
+```python
+# mapping from source to its long name format
+longname_mapping_dict = {
+    "retsinformationdk": "retsinformation.dk (Danish legal information)",
+    "skat": "Skat (Danish tax authority)",
+    "retspraksis": "retspraksis (Danish legal information)",
+    "hest": "Hestenettet (Danish debate forum)",
+    "cc": "Common Crawl",
+    "adl": "Archive for Danish Literature",
+    "botxt": "Bornholmsk (Danish dialect)",
+    "danavis": "Danish daily newspapers",
+    "dannet": "DanNet (Danish WordNet)",
+    "depbank": "Danish Dependency Treebank",
+    "ep": "European Parliament",
+    "ft": "Folketinget (Danish Parliament)",
+    "gutenberg": "Gutenberg",
+    "jvj": "Johannes V. Jensen (Danish author/poet)",
+    "naat": "NAAT",
+    "opensub": "Open Subtitles",
+    "relig": "Religious texts",
+    "spont": "Spontaneous speech",
+    "synne": "Synderjysk (Danish dialect)",
+    "tv2r": "TV 2 Radio (Danish news)",
+    "wiki": "Wikipedia",
+    "wikibooks": "Wikibooks",
+    "wikisource": "Wikisource",
+    "twfv19": "Twitter Folketingsvalget 2019 (Danish election tweets)",  # not present in this version of the dataset
+}
+```
 
 ## Additional Information
 
-### Contributing to the dataset
-
-We welcome contributions to the dataset, such as new sources, better data filtering and so on. To get started on contributing please see [the contribution guidelines](CONTRIBUTING.md)
-
 ### Citation Information
 
-
+The original version of Danish Gigaword was created as part of the following publication.
+
+> Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
+
+```
+@inproceedings{dagw,
+  title = {{The Danish Gigaword Corpus}},
+  author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
+  year = 2021,
+  booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
+  publisher = {NEALT}
+}
+```
+
+<!--
+Todo:
+add tests
+- unique ids
+- valid metadata
+
+add ci:
+- summary statistics
+- tables
+
+prettify:
+- license as independent column
+- ensure pretty_name is standard
+- potentially remove some columns
+-->
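The new README pairs the loading snippet with the two source mappings above. A minimal sketch of how they could be combined, assuming `domain_mapping_dict` and `longname_mapping_dict` are defined as in the README (only a small excerpt is repeated here to keep the sketch self-contained):

```python
from datasets import load_dataset

# excerpts of the README mappings
domain_mapping_dict = {"wiki": "Wiki & Books"}
longname_mapping_dict = {"wiki": "Wikipedia"}

name = "danish-foundation-models/danish-gigaword"
ds = load_dataset(name, split="train", streaming=True)

sample = next(iter(ds))
source = sample["source"]  # e.g. "wiki"
print(domain_mapping_dict.get(source), "/", longname_mapping_dict.get(source))
# e.g. -> Wiki & Books / Wikipedia
```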
data/adl/adl.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:d51c291d1cf6461a1e59dd45dfd63ee39a5c62cd3c2fd05877489d50aaa5115e
+size 106409966
data/botxt/botxt.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:b42642896dfda21b23bb8e8ef5ba65f878ebfa5fec2f6d57aec1e06778c75bbf
+size 1353171
data/dannet/dannet.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:905c2441a4c242e24d370775e9e035df3c67a7a1d797a615297cb6a1bbf51a96
+size 4743422
data/depbank/depbank.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:863aac5735bee6995b665864ea355b488e35bb2cca696ea340d8febc653b8886
+size 394917
data/ep/ep.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:85c8eb6954522c757ee3e410f7f277a74ecedd8e7507ef00a698a654dc8bea20
+size 171150568
data/ft/ft.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:31775c6e84a1542897641712e39d4c6cde2aa69673d7875c6a39f3148c08e0fb
+size 182049520
data/gutenberg/gutenberg.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:973df5121d3da73a5915f6dd1da0290ffbaece92b2c7c4dec562155974c0076f
+size 12361984
data/hest/hest.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:9b85d658074ebec3eb95da8f8e522d83707b646b5f3b8b706279496eec3b31c3
+size 748670544
data/jvj/jvj.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:7a524aafe8fe1ba86bc09c091b10aacf55e558124fef59e68f60bed03816636a
+size 6829395
data/naat/naat.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:6958784a0c4039e9357dee0dedc6bd010e7dd3573d2d9a4db45ce5e4a6608feb
+size 545253
data/relig/relig.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:ba59db9efa6756fd6306380c39e9f25b50c99ddb6b7c0c2391e417d95d0af6da
+size 2003050
data/retsinformationdk/retsinformationdk.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:69df3e71d482c746854535710ffb57c9ba3c9ac633931222e8be84d0e67cc22c
+size 651256719
data/retspraksis/retspraksis.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:28f86c894204d6c1348a5fdfae7b69d1d355ba311e42d70fd669d52138b95d3a
+size 87674092
data/skat/skat.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:5f87f38f90553725c889080b3def8e24dadd3b2eaee28b43bae2a19493cf2143
+size 165069920
data/spont/spont.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:0ac515b1dedc78fb9123bffbab2cf3c0fe1e126a070ad342d7d0c707096e838b
+size 1814921
data/synne/synne.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:701bf010bca88dd4722ffa72404b91e24703bd9552003371771bf1823dc58138
+size 77042
data/tv2r/tv2r.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:e5cc87b9de1c11ef580d939d1d877b5553d3c75aa33d0f2e280986b8787900a5
+size 40686259
data/wiki/wiki.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:41bb02c5b10290746b00750db69c565bfe25fda2529efcc603f108d820dc6c13
+size 242917206
data/wikibooks/wikibooks.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:5984554e9c048e06cd156903a345178dc18a95572e3b12fb4c6e6266bcc87fa5
+size 11282733
data/wikisource/wikisource.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:a4ee7ec0bb3147f06617c94a8951055a5a806c7917de229d6b2ec2df9c4c0b73
+size 9488335
paper/paper.md
DELETED
@@ -1,136 +0,0 @@
-# Danish DynaWord: Moving from one-shot datasets to continuously developed datasets
-
-Authors:
-
-This is a list of authors to be invited for co-authorship:
-
-CHC
-- Kenneth Enevoldsen
-- Jan Kostkan
-- Per
-- Kristoffer Nielbo
-- Marton
-- Martin (good CI ideas)
-
-Alexandra:
-- Dan Nielsen
-- Rasmus
-- Peter
-- Kristian
-- Torben
-
-DFM
-- Bolette Pedersen (or someone from her group)
-- Desmond
-- Peter
-
-Danish Royal Library? Other organizations that are important to include?
-
-# Abstract
-
-In this work we introduce Dynaword, an argument for moving toward continuously developed datasets as opposed to the current release-and-forget datasets.
-As an example we release Danish DynaWord
-
-dataset is available at: LINK
-
-# Introduction
-
-Current datasets
-While creating a current
-
-Current methods for dataset creation tackle only a small [@joshiStateFateLinguistic2021]
-In this project we specifically choose to focus on the low- to mid-resource language Danish (dan). We see two reasons for doing this:
-
-- The dynaword approach is most likely to be beneficial for low- to mid-resourced languages (class 2-4; @joshiStateFateLinguistic2021), which have contributors able and willing to contribute, whereas high-resource languages (class 5; @joshiStateFateLinguistic2021) could likely sustain multiple dynaword projects targeting specific domains.
-- not only for Danish b
-
-While it is in theory possible to open a PR on an existing dataset, this practice is rare; instead we often see improvements on an existing dataset published separately (see e.g. [@pascal_alie_kenneth_et_paper], [@that_guy_that_added_langauge_tag_to_a_dataset]). These derivative works rarely get as many downloads as the original.
-
-Contrasting this approach to code development - where it is common practice to create PRs to continually improve the codebase - makes this dataset development landscape seem immature and inefficient.
-
-## Related work
-
-### Existing approaches in dataset development
-
-Large projects like OSCAR [@OSCAR], HPLT [@hplt], and FineWeb [@fineweb] release iterative versions of datasets derived from Common Crawl [@commoncrawl].
-These approaches make it hard for contributors to join and contribute, and silo dataset development within a few institutions. Furthermore, the focus on
-Common Crawl ignores other valuable resources such as public APIs, and comes with a slew of ethical and legal concerns [@missing] which affect not only the usefulness of the datasets but also the models derived from them.
-While such resources, e.g. individual datasets derived from APIs, would be expensive for individual groups to collect, as each rarely offers enough data to be worth the time, opening up this approach to a community makes them more viable.
-
-Opening up the development pipeline also increases openness around the dataset collection. ADD SOMETHING on inclusion here.
-
-Read up on FineWeb!!! (I assume they do some CI)
-
-Other successful open-source projects: the dependency treebank project [@dep_treebank], ...
-
-Existing projects on openly licensed data [@elutherAI]
-
-We note that our approach is complementary to existing projects such as FineWeb.
-
-### Continuous Integration
-
-Do we need a section on this?
-
-### Danish and Scandinavian datasets
-
-Lacunae of Danish [@cite]
-Danish Gigaword [@dagw]
-Swedish Gigaword? [@swedish]
-NCC [@ncc_kummervold]
-
-Existing benchmarks covering Scandinavian languages, such as ScandEval [@scandeval; @scandeval2] and SEB [@seb], argue that it is reasonable to evaluate on the
-
-# Methods
-
-## Continuous Integration
-
-Our approach for continuous integration, how to submit, what we test for.
-
-# Results
-
-## Dataset collection
-
-Current collection.
-
-| Source          | Date       | Domain         | License | Size |
-| --------------- | ---------- | -------------- | ------- | ---- |
-| **Legal**       |            |                |         |      |
-| Retsinformation | date range | Legal, Written |         | 188M |
-| ...             |            |                |         |      |
-| **Total**       |            |                |         |      |
-
-For a description of each dataset we refer to the public repository.
-<!-- we could also include -->
-
-# Conclusion
-
-## Dataset delivery
-
-# Limitations
-
-- Is Danish too limited: should we consider multilingual sources, Scandinavian, Germanic, English?
-
-- Size:
-  - The size is currently limited; if it grows too large, development becomes problematic
-  - This is still way smaller than what could be extracted from CC
-
-- Only Danish: While developing CI for datasets is by no means new [@missing], doing so for open pre-training datasets in a collaborative fashion has
-not been tested on a larger scale. Once the approach has been validated we plan to host a collaboration along with Huggingface to develop these dataset sources.
-
-- Huggingface datasets as a development platform for datasets: Throughout this work it was clear to many of the developers that minor changes (e.g. filtering out a few bad examples) were both hard to create PRs for and hard to review, often requiring the reviewer to simply trust that the user did what was stated in the commit message. While previous projects have tackled this issue using human-readable formats [@dep_treebank], due to the scope of the dataset this would quickly become inefficient.
-This lack of clarity increases the likelihood of dataset attacks such as dataset poisoning [@missing]. We expect to see both interface and software development to detect and prevent such attacks.
-
-- Machine generated content within training data: Not
-
-Ethical and environmental considerations
-
-environmental:
-- a common codebase leads to less duplication of datasets and reduces the storage required
-- continual CI running on large datasets could be a concern. Currently our tests use a total of XXX CO2-eq (estimated using codecarbon). However, we have already seen people using training [@fineweb] and evaluating LLMs to approximate dataset quality; such workflows could quickly increase the CO2 consumption.
paper/references.bib
DELETED
@@ -1,25 +0,0 @@
-
-@article{joshiStateFateLinguistic2021,
-  title = {The {State} and {Fate} of {Linguistic} {Diversity} and {Inclusion} in the {NLP} {World}},
-  url = {http://arxiv.org/abs/2004.09095},
-  abstract = {Language technologies contribute to promoting multilingualism and linguistic diversity around the world. However, only a very small number of the over 7000 languages of the world are represented in the rapidly evolving language technologies and applications. In this paper we look at the relation between the types of languages, resources, and their representation in NLP conferences to understand the trajectory that different languages have followed over time. Our quantitative investigation underlines the disparity between languages, especially in terms of their resources, and calls into question the "language agnostic" status of current models and systems. Through this paper, we attempt to convince the ACL community to prioritise the resolution of the predicaments highlighted here, so that no language is left behind.},
-  urldate = {2021-03-20},
-  journal = {arXiv:2004.09095 [cs]},
-  author = {Joshi, Pratik and Santy, Sebastin and Budhiraja, Amar and Bali, Kalika and Choudhury, Monojit},
-  month = jan,
-  year = {2021},
-  note = {arXiv: 2004.09095},
-  keywords = {Computer Science - Computation and Language},
-}
-
-@inproceedings{dagw,
-  title = {The {{Danish Gigaword}} Corpus},
-  booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics ({{NoDaLiDa}})},
-  author = {{Str{\o}mberg-Derczynski}, Leon and Ciosici, Manuel and Baglini, Rebekah and Christiansen, Morten H. and Dalsgaard, Jacob Aarup and Fusaroli, Riccardo and Henrichsen, Peter Juel and Hvingelby, Rasmus and Kirkedal, Andreas and Kjeldsen, Alex Speed and Ladefoged, Claus and Nielsen, Finn Aarup and Madsen, Jens and Petersen, Malte Lau and Rystr{\o}m, Jonathan Hvithamar and Varab, Daniel},
-  year = {05 31--2 06 2021},
-  pages = {413--421},
-  publisher = {Link{\"o}ping University Electronic Press, Sweden},
-  address = {Reykjavik, Iceland (Online)},
-  abstract = {Danish language technology has been hindered by a lack of broad-coverage corpora at the scale modern NLP prefers. This paper describes the Danish Gigaword Corpus, the result of a focused effort to provide a diverse and freely-available one billion word corpus of Danish text. The Danish Gigaword corpus covers a wide array of time periods, domains, speakers' socio-economic status, and Danish dialects.},
-  file = {/Users/au561649/Zotero/storage/9B3GVP6D/Derczynski et al. - The Danish Gigaword Corpus.pdf}
-}
pyproject.toml
CHANGED
@@ -1,7 +1,7 @@
 [project]
-name = "danish-
-version = "1.0.
-description = "project code for the danish
+name = "danish-gigaword-2"
+version = "1.0.1"
+description = "project code for the danish gigaword 2 project"
 readme = "README.md"
 requires-python = ">=3.13"
 dependencies = [
@@ -12,5 +12,5 @@ dependencies = [
     "plotnine>=0.14.3",
     "pytest>=8.3.4",
     "seaborn>=0.13.2",
-    "
+    "toml>=0.10.2",
 ]
scripts/bump_version.py
CHANGED
@@ -1,15 +1,16 @@
 from packaging.version import Version
 from pathlib import Path
 
-import 
+import toml
+
 c_file = Path(__file__)
 pyproject = c_file.parent.parent / "pyproject.toml"
 
 
 with pyproject.open("r") as f:
-    data = 
+    data = toml.load(f)
 version = Version(data["project"]["version"])
 data["project"]["version"] = str(Version(f"{version.major}.{version.minor}.{version.micro + 1}"))
 
 with pyproject.open("w") as f:
-
+    toml.dump(data, f)
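For reference, the behaviour of the updated script can be sketched without touching the real pyproject.toml; the inline TOML string below is an illustrative stand-in for the file the script actually reads:

```python
from packaging.version import Version
import toml

# stand-in for reading pyproject.toml; bump_version.py reads the real file
data = toml.loads('[project]\nversion = "1.0.1"\n')

# parse with packaging and bump the micro component, as the script does
version = Version(data["project"]["version"])
data["project"]["version"] = str(Version(f"{version.major}.{version.minor}.{version.micro + 1}"))

print(toml.dumps(data))  # version is now "1.0.2"
```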
scripts/load_dataset.py
ADDED
@@ -0,0 +1,6 @@
+from datasets import load_dataset
+
+name = "../."  # "danish-foundation-models/danish-gigaword"
+ds = load_dataset("../.", split="train")
+
+ds
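The old README's advice about pinning a revision for reproducibility still applies to this loading script; a short sketch, where "main" is a placeholder for any branch, tag, or commit SHA on the Hub repo:

```python
from datasets import load_dataset

# pin the dataset to a fixed git revision so repeated loads are reproducible
name = "danish-foundation-models/danish-gigaword"
ds = load_dataset(name, split="train", revision="main")
```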
uv.lock
CHANGED
@@ -198,8 +198,8 @@ wheels = [
 ]
 
 [[package]]
-name = "danish-
-version = "1.0.
+name = "danish-gigaword-2"
+version = "1.0.1"
 source = { virtual = "." }
 dependencies = [
     { name = "datasets" },
@@ -209,7 +209,7 @@ dependencies = [
     { name = "plotnine" },
     { name = "pytest" },
     { name = "seaborn" },
-    { name = "
+    { name = "toml" },
 ]
 
 [package.metadata]
@@ -221,7 +221,7 @@ requires-dist = [
     { name = "plotnine", specifier = ">=0.14.3" },
     { name = "pytest", specifier = ">=8.3.4" },
     { name = "seaborn", specifier = ">=0.13.2" },
-    { name = "
+    { name = "toml", specifier = ">=0.10.2" },
 ]
 
 [[package]]
@@ -1071,12 +1071,12 @@ wheels = [
 ]
 
 [[package]]
-name = "
-version = "0.
+name = "toml"
+version = "0.10.2"
 source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/
+sdist = { url = "https://files.pythonhosted.org/packages/be/ba/1f744cdc819428fc6b5084ec34d9b30660f6f9daaf70eead706e3203ec3c/toml-0.10.2.tar.gz", hash = "sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f", size = 22253 }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/
+    { url = "https://files.pythonhosted.org/packages/44/6f/7120676b6d73228c96e17f1f794d8ab046fc910d781c8d151120c3f1569e/toml-0.10.2-py2.py3-none-any.whl", hash = "sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b", size = 16588 },
 ]
 
 [[package]]