---
datasets:
- oscar
language:
- da
widget:
- text: Der var engang
---

# What is this?

A GPT-2 model (medium version, ~354 M parameters) for Danish text generation. The model was not pre-trained from scratch but adapted from the English version using [CLP-Transfer](https://arxiv.org/abs/2301.09626).

# How to use

Test the model using the pipeline from the [🤗 Transformers](https://github.com/huggingface/transformers) library:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="KennethTM/gpt2-medium-danish")
text = generator("Manden arbejdede som")

print(text[0]["generated_text"])
```
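
The pipeline forwards generation keyword arguments to `model.generate`, so sampling behavior can be tuned directly. A minimal sketch, continuing from the `generator` above (the settings are illustrative, not recommendations from this card):

```python
# Illustrative sampling settings, not tuned recommendations
text = generator(
    "Manden arbejdede som",
    max_new_tokens=50,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    num_return_sequences=3,
)

for candidate in text:
    print(candidate["generated_text"])
```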

Or load it using the Auto* classes:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("KennethTM/gpt2-medium-danish")
model = AutoModelForCausalLM.from_pretrained("KennethTM/gpt2-medium-danish")
```
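
With the tokenizer and model loaded this way, text can be generated by calling `generate` directly. A small sketch (the sampling settings are illustrative):

```python
import torch

prompt = tokenizer("Manden arbejdede som", return_tensors="pt")

with torch.no_grad():
    output = model.generate(
        **prompt,
        max_new_tokens=50,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 defines no padding token
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```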

# Model training

The model was trained on the Danish part of the [OSCAR dataset](https://huggingface.co/datasets/oscar) (`unshuffled_deduplicated_da`) with a context length of 1024 tokens.
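
For illustration, this is roughly how such data could be loaded and tokenized with the [🤗 Datasets](https://github.com/huggingface/datasets) library; the exact preprocessing used for training is not documented here, so the truncation strategy is an assumption:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Danish part of the OSCAR dataset, as named above
dataset = load_dataset("oscar", "unshuffled_deduplicated_da", split="train")

tokenizer = AutoTokenizer.from_pretrained("KennethTM/gpt2-medium-danish")
context_length = 1024

def tokenize(batch):
    # Fixed-length chunks of 1024 tokens; the truncation strategy is an assumption
    return tokenizer(batch["text"], truncation=True, max_length=context_length)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)
```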

The model was initialized from the English [GPT-2 medium model](https://huggingface.co/gpt2-medium) (the source model), with new word token embeddings created from the Danish [GPT-2 small model](https://huggingface.co/KennethTM/gpt2-small-danish) (the helper model) using the [CLP-Transfer method](https://github.com/malteos/clp-transfer).
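
In rough terms, CLP-Transfer copies the source embeddings for tokens shared between the source and target vocabularies, and initializes each new Danish token as a combination of the overlapping tokens' source embeddings, weighted by similarity in the helper model's embedding space. A minimal sketch of that idea, not the reference implementation (the softmax normalization in particular is an assumption):

```python
import torch

def clp_transfer(source_emb, helper_emb, source_vocab, target_vocab):
    """Sketch of CLP-Transfer embedding initialization.

    source_emb: embedding matrix of the English source model (gpt2-medium)
    helper_emb: embedding matrix of the Danish helper model, which shares
                the target (Danish) tokenizer
    *_vocab: dicts mapping token string -> id
    """
    target_emb = torch.empty(len(target_vocab), source_emb.size(1))

    # Tokens present in both vocabularies keep their source embedding
    overlap = [t for t in target_vocab if t in source_vocab]
    overlap_target = torch.tensor([target_vocab[t] for t in overlap])
    overlap_source = torch.tensor([source_vocab[t] for t in overlap])
    target_emb[overlap_target] = source_emb[overlap_source]

    # New tokens: combine the overlapping tokens' *source* embeddings,
    # weighted by similarity in the *helper* model's embedding space
    for token in (t for t in target_vocab if t not in source_vocab):
        sims = torch.cosine_similarity(
            helper_emb[target_vocab[token]].unsqueeze(0),
            helper_emb[overlap_target],
        )
        weights = torch.softmax(sims, dim=0)  # normalization is an assumption
        target_emb[target_vocab[token]] = weights @ source_emb[overlap_source]

    return target_emb
```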

The whole model was then trained on ~1,000,000 samples.

For reference, the model achieves a perplexity of 24.7 on 5,000 random validation samples.
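
Perplexity is the exponential of the average cross-entropy loss on held-out text. A sketch of how such a figure can be computed (averaging per-sample rather than per-token losses is a simplification):

```python
import torch

def perplexity(model, tokenizer, texts, device="cpu"):
    model.to(device).eval()
    losses = []
    for text in texts:
        inputs = tokenizer(
            text, return_tensors="pt", truncation=True, max_length=1024
        ).to(device)
        with torch.no_grad():
            # labels=input_ids makes the model return the LM cross-entropy loss
            loss = model(**inputs, labels=inputs["input_ids"]).loss
        losses.append(loss.item())
    return torch.exp(torch.tensor(losses).mean()).item()
```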

Training was done on a single 8 GB GPU.
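
Fitting GPT-2 medium training into 8 GB typically requires memory-saving measures. The settings below are illustrative assumptions, not the hyperparameters actually used:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gpt2-medium-danish",
    per_device_train_batch_size=2,   # small batch to fit in 8 GB
    gradient_accumulation_steps=16,  # recover a larger effective batch size
    fp16=True,                       # mixed precision reduces activation memory
    gradient_checkpointing=True,     # trade compute for memory
)
```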

# Notes

This is a pre-trained model; for optimal performance, it should be fine-tuned for new tasks.
