---
license: apache-2.0
pipeline_tag: text-generation
language:
- da
tags:
- pretrained
inference:
  parameters:
    temperature: 0.7
datasets:
- DDSC/partial-danish-gigaword-no-twitter
base_model: mistralai/Mistral-7B-v0.1
---

# Model Card for Munin 7B Alpha

The Munin 7B Alpha Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters, based on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
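
As a quick orientation, here is a minimal generation sketch using the standard Hugging Face `transformers` API. The repository id `danish-foundation-models/munin-7b-alpha` is an assumption; substitute this repository's actual id if it differs. The `temperature=0.7` matches the inference default declared in the metadata above.

```python
# Minimal generation sketch for a pretrained base model (no chat template).
# NOTE: the repository id below is an assumption; use this repo's actual id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "danish-foundation-models/munin-7b-alpha"  # assumed id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# A base model simply continues text, so prompt it with plain Danish prose.
inputs = tokenizer("Danmark er et land i Skandinavien, som", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```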

It has been trained on [Danish Gigaword](https://gigaword.dk/) using [continual pretraining](https://doi.org/10.48550/arXiv.2308.04014).
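
In outline, continual pretraining means resuming standard causal-language-modelling training of the base model on the new corpus rather than training from scratch. The sketch below illustrates that setup with the `datasets` and `transformers` libraries; the hyperparameters, the `"text"` column name, and the single-process `Trainer` setup are illustrative assumptions, not the configuration used to train Munin 7B Alpha.

```python
# Illustrative continual-pretraining sketch: resume causal-LM training of
# Mistral-7B-v0.1 on Danish Gigaword. Hyperparameters are placeholders, NOT
# the values used for Munin 7B Alpha.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Mistral defines no pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Assumes the dataset exposes its documents in a "text" column.
dataset = load_dataset("DDSC/partial-danish-gigaword-no-twitter", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="munin-7b-alpha",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=2e-5,
        num_train_epochs=1,
        bf16=True,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```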

For full details of this model, please read our [release blog post](not-available-yet).

## Notice

Munin 7B Alpha is, like Mistral 7B, a pretrained base model and therefore does not have any moderation mechanisms.

## The Danish Foundation Models Team

- From [the Center for Humanities Computing at Aarhus University](https://chc.au.dk/):
  - Kenneth Enevoldsen (kenneth.enevoldsen@cas.au.dk)
  - Lasse Hansen (lasse.hansen@clin.au.dk)
  - Kristoffer Laigaard Nielbo (kln@cas.au.dk)
- From [the Alexandra Institute](https://alexandra.dk/):
  - Peter Bjørn Jørgensen (peter.jorgensen@alexandra.dk)
  - Rasmus Larsen (rasmus.larsen@alexandra.dk)
  - Dan Saattrup Nielsen (dan.nielsen@alexandra.dk)

## With Support From

[Danish e-infrastructure Consortium](https://deic.dk/) and [the Danish Defence](https://www.forsvaret.dk/).