GPT-2 recycled for Dutch (small, adapted lexical embeddings)

Wietse de Vries, Malvina Nissim

Model description

This model is based on the small OpenAI GPT-2 (gpt2) model.

The Transformer layer weights in this model are identical to the original English model, but the lexical layer has been retrained for a Dutch vocabulary.
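As a sanity check, this property can be inspected directly. The sketch below (using the transformers and torch packages) compares the two checkpoints; the exact-equality check assumes both load without any weight conversion, and the expected outcomes in the comments follow from the description above rather than from a documented guarantee.

import torch
from transformers import GPT2Model

en = GPT2Model.from_pretrained("gpt2")
nl = GPT2Model.from_pretrained("GroNLP/gpt2-small-dutch-embeddings")

# The lexical (token embedding) layers differ: the Dutch model has its own
# vocabulary, so shapes and values of the embedding matrix do not match.
print(en.wte.weight.shape, nl.wte.weight.shape)

# The Transformer blocks should carry identical weights.
blocks_match = all(
    torch.equal(p_en, p_nl)
    for p_en, p_nl in zip(en.h.parameters(), nl.h.parameters())
)
print(blocks_match)  # expected: True, per the model description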

For details, check out our paper on arXiv and the code on GitHub.

Related models

Dutch

Italian

How to use

from transformers import pipeline

pipe = pipeline("text-generation", model="GroNLP/gpt2-small-dutch-embeddings")
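For example, to generate Dutch text with the pipeline (the prompt and sampling settings below are illustrative, not prescribed by the model card):

print(pipe("Het verhaal begint", max_new_tokens=30, do_sample=True)[0]["generated_text"])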
from transformers import AutoTokenizer, AutoModel, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-small-dutch-embeddings")
model = AutoModel.from_pretrained("GroNLP/gpt2-small-dutch-embeddings")  # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/gpt2-small-dutch-embeddings")  # TensorFlow
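Because only the lexical layer was retrained, the tokenizer loaded above carries the Dutch vocabulary. A quick, illustrative check (the example sentence is arbitrary):

print(tokenizer.tokenize("Vandaag is het mooi weer."))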

BibTeX entry

@misc{devries2020good,
      title={As good as new. How to successfully recycle English GPT-2 to make models for other languages}, 
      author={Wietse de Vries and Malvina Nissim},
      year={2020},
      eprint={2012.05628},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}