Following Mitchell et al. (2019), the following constitutes a model card for this model.
Organization developing the Model: The Danish Foundation Models project
Model Creation Date: June 2022
Model Type: Transformer (Vaswani et al., 2017) encoder model; BERT (Devlin et al., 2019)
Feedback on the Model: For feedback on the model, please use the community forum.
Training logs and performance metrics: Check out this Weights & Biases dashboard.
Primary Intended Uses:
The primary intended use case of this model is the reproduction and validation of dataset quality. The intended use cases for future iterations of this model are the application in industry and research for Danish natural language tasks.
Primary Intended Users:
Future iterations of the model are intended for NLP practitioners dealing with Danish text documents.
Out-of-Scope Uses:
The model should not be used for profiling in ways that are inconsiderate of the potential harm such profiling might cause, such as racial profiling.
Card prompts - Relevant Factors:
Relevant factors include the language used. Our model is trained on a Danish text corpus and is primarily intended for validating the quality of its training data.
Card prompts - Evaluation Factors:
Future iterations of this model should include a validation of biases pertaining to gender, race, and religious and social groups.
Our model is evaluated on the following performance metrics:
- Pseudo-perplexity, following Salazar et al. (2020), across eight distinct domains: Danish dialects, books, legal text, social media (Reddit, Twitter), spontaneous speech, news, and Wikipedia.
- The Danish subsection of ScandEval (Nielsen, 2021).
To see the performance metrics, check out this Weights & Biases dashboard.
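Pseudo-perplexity (Salazar et al., 2020) scores a sentence by masking each token in turn, asking the masked language model for the probability of the true token at that position, and exponentiating the negative mean of those log-probabilities. A minimal sketch of the final aggregation step, taking per-token masked log-probabilities as input (the illustrative scores below are made up, not from a real model):

```python
import math

def pseudo_perplexity(token_log_probs):
    """Pseudo-perplexity as in Salazar et al. (2020):
    exp of the negative mean per-token masked log-probability.

    Each entry in `token_log_probs` is log P(token_i | sentence with
    position i masked), as scored by a masked language model.
    """
    if not token_log_probs:
        raise ValueError("need at least one token score")
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

# Illustrative scores only: a sentence whose tokens the model finds
# likely receives a lower pseudo-perplexity.
likely = [math.log(0.9), math.log(0.8), math.log(0.85)]
unlikely = [math.log(0.1), math.log(0.05), math.log(0.2)]
assert pseudo_perplexity(likely) < pseudo_perplexity(unlikely)
```

A lower score on a domain (e.g. news vs. dialects) indicates the model assigns higher probability to text from that domain.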
Approaches to Uncertainty and Variability:
Due to the cost of training, the model is only pre-trained once, but ScandEval fine-tunes it ten times to obtain a reasonable estimate of model performance.
The ScandEval Danish benchmark includes:
- Named entity recognition on DaNE (Hvingelby et al., 2020).
- Part-of-speech tagging and dependency parsing on DDT (Kromann, 2003).
- Sentiment classification on AngryTweets, TwitterSent, and Europarl (Alexandra Institute, 2022), as well as LCC (Nielsen, 2022).
- Hate speech classification on DKHate (Sigurbergsson & Derczynski, 2020).
The ScandEval benchmark is the most comprehensive benchmark for Danish. Pseudo-perplexity was analysed to examine the model's ability to model specific language domains.
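The ten fine-tuning runs per task yield a distribution of scores rather than a single number; these are typically summarized as a mean with a sample standard deviation. A minimal sketch with hypothetical scores (not real benchmark results):

```python
import statistics

# Hypothetical F1 scores from ten fine-tuning runs (illustrative only).
scores = [82.1, 81.7, 82.4, 81.9, 82.0, 82.3, 81.5, 82.2, 81.8, 82.1]

mean = statistics.mean(scores)
std = statistics.stdev(scores)  # sample standard deviation across runs
print(f"{mean:.2f} ± {std:.2f}")
```

Reporting the spread makes clear how much of a difference between two models is explained by fine-tuning variance alone.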
For our training data, we sample from HopeTwitter, DaNews, DAGW, and Netarkivet Text (NAT) with the probabilities 0.10, 0.10, 0.10, and 0.70, respectively. For more information on the training and the datasets, see the respective datasheets on the Danish Foundation Models GitHub page.
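The weighted sampling above can be sketched with the standard library; the exact sampling machinery in the training pipeline is not specified here, so this is only an illustration of the stated mixture:

```python
import random

# Sampling weights from the model card: HopeTwitter, DaNews, DAGW at
# 0.10 each, Netarkivet Text (NAT) at 0.70.
SOURCES = ["HopeTwitter", "DaNews", "DAGW", "NAT"]
WEIGHTS = [0.10, 0.10, 0.10, 0.70]

def sample_source(rng):
    """Pick the corpus to draw the next training document from."""
    return rng.choices(SOURCES, weights=WEIGHTS, k=1)[0]

# Over many draws, roughly 70% of documents come from NAT.
rng = random.Random(0)
draws = [sample_source(rng) for _ in range(10_000)]
nat_share = draws.count("NAT") / len(draws)
```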
Input documents are tokenized using the tokenizer of the Danish BERT by BotXO (Møllerhøj, 2019), which uses byte-pair encoding (BPE) with a vocabulary size of ~30,000 and NFKC normalization.
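The NFKC normalization step mentioned above is available in Python's standard library; a small sketch of just this preprocessing step (the BPE tokenizer itself is not reproduced here):

```python
import unicodedata

def nfkc(text: str) -> str:
    """Apply the NFKC Unicode normalization used before BPE tokenization."""
    return unicodedata.normalize("NFKC", text)

# NFKC folds compatibility characters into canonical equivalents, e.g.
# the ligature "ﬁ" becomes "fi" and fullwidth digits become ASCII.
assert nfkc("ﬁnansiel") == "finansiel"
assert nfkc("２０２２") == "2022"
```

Normalizing before tokenization keeps visually identical strings from mapping to different token sequences.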
Data: The data is sourced from DaNews, DAGW, HopeTwitter, and Netarkivet Text (NAT) and might thus contain hate speech, sexually explicit content, and otherwise harmful content.
Mitigations: We considered removing sexually explicit content by filtering web domains using DNS filtering or Google Safe Search. However, upon examining the filtered domains, these were also found to include news media aimed at a specific demographic (e.g. Dagens.dk) and educational sites pertaining to sexual education. We also examined the use of word-based filters, but found that these might disproportionately affect certain demographic groups.
Risk and Harms: As Netarkivet Text covers such a wide array of the Danish internet, it undoubtedly contains personal information. To reduce the risk of the model memorizing this information, we have deduplicated the data.
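The deduplication method is not detailed in this card; one common minimal form is exact-match deduplication via content hashing, sketched below under that assumption (the light normalization with `strip`/`lower` is illustrative, not the project's actual pipeline):

```python
import hashlib

def deduplicate(documents):
    """Drop exact duplicate documents, keeping the first occurrence.

    Hashing the normalized text keeps memory at one digest per unique
    document rather than retaining the full corpus in a set.
    """
    seen = set()
    unique = []
    for doc in documents:
        digest = hashlib.sha256(doc.strip().lower().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

docs = ["Hej verden.", "hej verden.", "Et unikt dokument."]
assert deduplicate(docs) == ["Hej verden.", "Et unikt dokument."]
```

Removing repeated documents lowers the chance that any single piece of personal information is seen often enough during pre-training to be memorized verbatim.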
-  Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model Cards for Model Reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency, 220–229. https://doi.org/10.1145/3287560.3287596
-  Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention Is All You Need. ArXiv:1706.03762 [Cs]. http://arxiv.org/abs/1706.03762
-  Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. http://arxiv.org/abs/1810.04805
-  Salazar, J., Liang, D., Nguyen, T. Q., & Kirchhoff, K. (2020). Masked Language Model Scoring. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2699–2712. https://doi.org/10.18653/v1/2020.acl-main.240
-  Nielsen, D. S. (2021). ScandEval: Evaluation of language models on mono- or multilingual Scandinavian language tasks. GitHub. https://github.com/saattrupdan/ScandEval
-  Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A named entity resource for danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604.
-  Kromann, M. T. (2003). The Danish Dependency Treebank and the DTAG Treebank Tool. https://research.cbs.dk/en/publications/the-danish-dependency-treebank-and-the-dtag-treebank-tool
-  Alexandrainst/danlp. (2022). Alexandra Institute. https://github.com/alexandrainst/danlp/blob/a1e9fa70fc5a3ae7ff78877062da3a8a8da80758/docs/docs/datasets.md (Original work published 2019)
-  Nielsen, F. Å. (2022). Lcc-sentiment. https://github.com/fnielsen/lcc-sentiment (Original work published 2016)
-  Sigurbergsson, G. I., & Derczynski, L. (2020). Offensive Language and Hate Speech Detection for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 3498–3508. https://aclanthology.org/2020.lrec-1.430
-  Møllerhøj, J. D. (2019, December 5). Danish BERT model: BotXO has trained the most advanced BERT model. BotXO. https://www.botxo.ai/blog/danish-bert-model/