
# What is this?

A fine-tuned GPT-2 model (medium version, ~354.8M parameters) for generating responses to customer reviews in Danish.

# How to use

The model is based on the gpt2-medium-danish model and performs better than the smaller variant (gpt2-small-danish-review-response). It is adapted to generate responses to customer reviews in Danish through supervised fine-tuning, using a prompting template applied to the training examples (see the example below).

Test the model using the pipeline from the 🤗 Transformers library:

```python
from transformers import pipeline

# Load the text-generation pipeline with the fine-tuned model
generator = pipeline("text-generation", model="KennethTM/gpt2-medium-danish-review-response")

# Build a prompt in the same format used during fine-tuning
def prompt_template(user, review):
    return f"### Bruger:\n{user}\n\n### Anmeldelse:\n{review}\n\n### Svar:\nKære {user}\n"

prompt = prompt_template(user="Anders", review="Umuligt at komme igennem på telefonen.")

text = generator(prompt)

print(text[0]["generated_text"])
```
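By default the pipeline uses the model's generation settings. Sampling parameters can be passed directly to the pipeline call to control the length and variety of the response; the values below are illustrative, not settings recommended by the model author:

```python
# Illustrative generation settings (assumed values, not tuned defaults)
text = generator(
    prompt,
    max_new_tokens=128,  # limit the length of the generated response
    do_sample=True,      # sample instead of greedy decoding
    top_p=0.95,          # nucleus sampling
    temperature=0.8,     # higher values give more varied responses
)

print(text[0]["generated_text"])
```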

Or load it using the Auto* classes:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("KennethTM/gpt2-medium-danish-review-response")
model = AutoModelForCausalLM.from_pretrained("KennethTM/gpt2-medium-danish-review-response")
```
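With the tokenizer and model loaded this way, a response can be generated by tokenizing a prompt in the same template format and calling `generate`. This is a minimal sketch; the generation parameters are assumptions, not values specified for this model:

```python
# Prompt in the same format as the fine-tuning template
prompt = (
    "### Bruger:\nAnders\n\n"
    "### Anmeldelse:\nUmuligt at komme igennem på telefonen.\n\n"
    "### Svar:\nKære Anders\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=128,                   # illustrative limit on response length
    do_sample=True,                       # illustrative sampling settings
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```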