---
datasets:
- oscar
language:
- da
widget:
- text: Der var engang
---

# What is this?

A GPT-2 model (medium version, ~354.8 M parameters) for Danish text generation. The model was not pre-trained from scratch but adapted from the English version using [CLP-Transfer](https://arxiv.org/abs/2301.09626).

# How to use

Test the model using the pipeline from the [🤗 Transformers](https://github.com/huggingface/transformers) library:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="KennethTM/gpt2-medium-danish")
text = generator("Manden arbejdede som")

print(text[0]["generated_text"])
```

Or load it using the Auto* classes:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("KennethTM/gpt2-medium-danish")
model = AutoModelForCausalLM.from_pretrained("KennethTM/gpt2-medium-danish")
```

# Model training

The training data consist of the Danish part of the [oscar dataset](https://huggingface.co/datasets/oscar) ('unshuffled_deduplicated_da'), processed with a context length of 1024 tokens.

The model weights are initialized from the English [GPT-2 medium model](https://huggingface.co/gpt2-medium) ('source model'), with new word token embeddings created from the Danish [GPT-2 small model](https://huggingface.co/KennethTM/gpt2-small-danish) ('helper model') using the [CLP-Transfer method](https://github.com/malteos/clp-transfer).

The model is trained on ~1,000,000 samples using an 8 GB GPU. For reference, it achieves a perplexity of 24.7 on 5,000 random validation samples.

# Notes

This is a pre-trained model; for optimal performance, it should be fine-tuned for new tasks.
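As a minimal sketch of such fine-tuning with the 🤗 Trainer API (the dataset file `my_danish_corpus.txt` and all hyperparameters below are placeholders, to be adapted to your own task and hardware):

```python
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("KennethTM/gpt2-medium-danish")
model = AutoModelForCausalLM.from_pretrained("KennethTM/gpt2-medium-danish")

# GPT-2 has no padding token; reuse the end-of-text token for batching
tokenizer.pad_token = tokenizer.eos_token

# Placeholder plain-text corpus with one sample per line
dataset = load_dataset("text", data_files={"train": "my_danish_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# mlm=False gives standard causal language modelling labels
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="gpt2-medium-danish-finetuned",
    per_device_train_batch_size=2,   # small batch to fit a small GPU
    gradient_accumulation_steps=8,   # keeps the effective batch size larger
    num_train_epochs=1,
    learning_rate=5e-5,
    fp16=True,                       # mixed precision, assuming a CUDA GPU
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
)

trainer.train()
```

The small per-device batch size combined with gradient accumulation mirrors the memory constraints of the 8 GB GPU used for the original adaptation.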