The trained model is a fine-tuned GPT-2 text generation model that takes a positive prompt as input and outputs a negative prompt intended to match it.

However, the results are largely random: the generated negative prompts are mostly unrelated to the input, and sometimes the model even outputs a positive prompt instead.

Since the results are not good, I have not yet cleaned up the project or made it presentable.

Use this mostly for your own curiosity or experimentation.

GitHub Project

https://github.com/MNeMoNiCuZ/NegativePromptGenerator
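Below is a minimal sketch of how the model might be loaded and queried with the Hugging Face transformers library. The model ID is assumed from the repository name, and the example prompt and generation parameters are illustrative, not the author's settings:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model ID, derived from the repository name.
model_id = "MNeMoNiCuZ/NegativePromptGenerator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Feed a positive prompt and sample a continuation from the model.
positive_prompt = "a serene mountain lake at sunrise, highly detailed"
inputs = tokenizer(positive_prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=50,          # illustrative values
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)

# GPT-2 echoes the input, so strip it to keep only the generated text.
generated = tokenizer.decode(outputs[0], skip_special_tokens=True)
negative_prompt = generated[len(positive_prompt):].strip()
print(negative_prompt)
```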

Model size: 124M parameters (F32, safetensors)