What is this?

A fine-tuned GPT-2 model (small version, 124M parameters) for generating responses to customer reviews in Danish.

How to use

The model is based on the gpt2-small-danish model. Supervised fine-tuning is applied to adapt the model to generate responses to customer reviews in Danish. A prompt template is applied to the training examples (see the example below).

Test the model using the pipeline from the 🤗 Transformers library:

from transformers import pipeline

generator = pipeline("text-generation", model="KennethTM/gpt2-small-danish-review-response")

# Build a prompt with the same template used during fine-tuning
# (Bruger = user, Anmeldelse = review, Svar = reply)
def prompt_template(user, review):
    return f"### Bruger:\n{user}\n\n### Anmeldelse:\n{review}\n\n### Svar:\nKære {user}\n"

prompt = prompt_template(user="Anders", review="Umuligt at komme igennem på telefonen.")

text = generator(prompt)

print(text[0]["generated_text"])

Or load it using the Auto* classes:

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("KennethTM/gpt2-small-danish-review-response")
model = AutoModelForCausalLM.from_pretrained("KennethTM/gpt2-small-danish-review-response")
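
For more control over generation, the tokenizer and model can also be used directly. The sketch below reuses the prompt_template function from the pipeline example; the generation settings are illustrative assumptions, not values from the model card:

prompt = prompt_template(user="Anders", review="Umuligt at komme igennem på telefonen.")

# Tokenize the prompt and generate a continuation
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=128,                    # illustrative limit on response length
    do_sample=True,                        # sample instead of greedy decoding
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,   # GPT-2 has no pad token; reuse EOS
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))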

Notes

The model may get the sentiment of the review wrong, resulting in a mismatch between the review and the response. The model would probably benefit from further tuning on sentiment.
