---
license: openrail
pipeline_tag: text-generation
library_name: transformers
widget:
- text: a photograph of
  example_title: photo
- text: a bizarre cg render
  example_title: render
- text: the spaghetti
  example_title: meal?
- text: a (detailed+ intricate)+ picture
  example_title: weights
inference:
  parameters:
    temperature: 2.4
    max_new_tokens: 200
---
A model trained on the prompts of all the images in my InvokeAI output directory, meant to be used with InvokeAI (a Stable Diffusion implementation/UI) to generate new, probably wild, nightmare images.
This is mostly trained on positive prompts, though you may catch some words in [] brackets, which InvokeAI treats as negative. GPT-Neo is usually quite good at pairing parentheses, quotation marks, etc. - however, don't be too surprised if it generates something that's not quite valid InvokeAI prompt syntax.
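If you want to screen out the occasional malformed result, a quick bracket/quote balance check can be run on the generated text. This helper is purely illustrative (it's not part of the model or of InvokeAI, whose actual parser is more forgiving):

```python
def brackets_balanced(prompt: str) -> bool:
    """Rough check that (), [] and double quotes pair up in a generated prompt."""
    stack = []
    pairs = {")": "(", "]": "["}
    for ch in prompt:
        if ch in "([":
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack and prompt.count('"') % 2 == 0
```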
To use this model, you can load it as a pipeline like so:
```python
from transformers import pipeline

generator = pipeline(model="cactusfriend/nightmare-invokeai-prompts",
                     tokenizer="cactusfriend/nightmare-invokeai-prompts",
                     task="text-generation")
```
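A single call to the pipeline then looks like this (the prompt text and sampling settings below are only examples, not recommendations from the model card):

```python
# One-off generation; do_sample enables temperature-based sampling.
out = generator("a photograph of", max_new_tokens=150,
                do_sample=True, temperature=1.8, top_p=0.9, top_k=40)
print(out[0]["generated_text"])
```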
Here's an example function that will generate 20 prompts by default, at a temperature of 1.8, which seems to work well for this model.
```python
def makePrompts(prompt: str, *, p: float = 0.9,
                k: int = 40, num: int = 20,
                temp: float = 1.8, mnt: int = 150):
    # Sample `num` continuations of the prompt, de-duplicate them, and print them.
    outputs = generator(prompt, max_new_tokens=mnt,
                        temperature=temp, do_sample=True,
                        top_p=p, top_k=k, num_return_sequences=num)
    items = {i['generated_text'] for i in outputs}
    print("-" * 60)
    print("\n ---\n".join(items))
    print("-" * 60)
```
Then, you can call it like so:
makePrompts("a photograph of")
# or, to change some defaults:
makePrompts("spaghetti all over", temp=1.4, p=0.92, k=45)