---
license: cc-by-sa-3.0

config_names:
- Abbreviation equality
- Adjective inflection analogy
- Clinical analogy
- Clinical similarity
- Noun inflection analogy
- UMNSRS relatedness
- UMNSRS similarity
- Verb inflection analogy



configs:
- config_name: Abbreviation equality
  data_files:
  - split: train
    path: Abbreviation equality/train*

- config_name: Adjective inflection analogy
  data_files:
  - split: train
    path: Adjective inflection analogy/train*

- config_name: Clinical analogy
  data_files:
  - split: train
    path: Clinical analogy/train*

- config_name: Clinical similarity
  data_files:
  - split: train
    path: Clinical similarity/train*

- config_name: Noun inflection analogy
  data_files:
  - split: train
    path: Noun inflection analogy/train*

- config_name: UMNSRS relatedness
  data_files:
  - split: train
    path: UMNSRS relatedness/train*

- config_name: UMNSRS similarity
  data_files:
  - split: train
    path: UMNSRS similarity/train*

- config_name: Verb inflection analogy
  data_files:
  - split: train
    path: Verb inflection analogy/train*
    
---

# Danish medical word embeddings

MeDa-We was trained on a Danish medical corpus of 123M tokens. The word embeddings are 300-dimensional and were trained using [FastText](https://fasttext.cc/).
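
The embeddings can be loaded with the FastText Python bindings. A minimal sketch, assuming the model file is named `meda_we.bin` (a placeholder; use the actual `.bin` file shipped with this repository):

```python
import fasttext

# "meda_we.bin" is a placeholder file name; substitute the actual binary
# model file from this repository.
model = fasttext.load_model("meda_we.bin")

# Look up a 300-dimensional word vector. FastText also handles
# out-of-vocabulary words via subword n-grams.
vector = model.get_word_vector("insulin")
print(vector.shape)  # (300,)

# Nearest neighbours of a clinical term.
print(model.get_nearest_neighbors("diabetes", k=5))
```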

The embeddings were trained for 10 epochs using a window size of 5 and 10 negative samples.
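
For reference, a training run with these hyperparameters could look roughly like the sketch below, using the FastText Python bindings. This is not the exact training script; the corpus path and the choice of the skip-gram variant are assumptions.

```python
import fasttext

# Train 300-dimensional embeddings with the hyperparameters described above:
# 10 epochs, window size 5, 10 negative samples.
# "danish_medical_corpus.txt" is a placeholder for the 123M-token corpus,
# and model="skipgram" is an assumption about the training variant.
model = fasttext.train_unsupervised(
    "danish_medical_corpus.txt",
    model="skipgram",
    dim=300,
    epoch=10,
    ws=5,
    neg=10,
)

model.save_model("meda_we.bin")
```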

The development of the corpus and the word embeddings is described in more detail in our [paper](https://aclanthology.org/2023.nodalida-1.31/).

We also trained a transformer model on the same corpus; it can be found [here](https://huggingface.co/jannikskytt/MeDa-Bert).
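
The evaluation datasets listed as configs above can be loaded with the 🤗 `datasets` library. A minimal sketch; `username/MeDa-We` is a placeholder for this repository's actual Hub id:

```python
from datasets import load_dataset

# The repository id is a placeholder; config names match the YAML front matter.
umnsrs_sim = load_dataset("username/MeDa-We", "UMNSRS similarity", split="train")
print(umnsrs_sim[0])
```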

### Citing

```
@inproceedings{pedersen-etal-2023-meda,
    title = "{M}e{D}a-{BERT}: A medical {D}anish pretrained transformer model",
    author = "Pedersen, Jannik  and
      Laursen, Martin  and
      Vinholt, Pernille  and
      Savarimuthu, Thiusius Rajeeth",
    booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)",
    month = may,
    year = "2023",
    address = "T{\'o}rshavn, Faroe Islands",
    publisher = "University of Tartu Library",
    url = "https://aclanthology.org/2023.nodalida-1.31",
    pages = "301--307",
}
```