
A model trained on the prompts of all the images in my InvokeAI output directory, meant to be used with InvokeAI (a Stable Diffusion implementation/UI) to generate new, probably wild nightmare images.

This is mostly trained on positive prompts, though you may catch some words in [] brackets, which InvokeAI treats as negative. GPT-Neo is usually quite good at pairing parentheses, quotation marks, etc., but don't be too surprised if it generates something that isn't quite valid InvokeAI prompt syntax.
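If you want to screen out generations with obviously mismatched brackets before handing them to InvokeAI, a simple balance check is enough. This is just a sketch; the `is_balanced` helper below is hypothetical and not part of the model or InvokeAI:

```python
def is_balanced(prompt: str) -> bool:
    """Return True if (), [] and double quotes are paired in the prompt."""
    pairs = {")": "(", "]": "["}
    stack = []
    for ch in prompt:
        if ch in "([":
            stack.append(ch)
        elif ch in ")]":
            if not stack or stack.pop() != pairs[ch]:
                return False
    # quotes just need to occur an even number of times
    return not stack and prompt.count('"') % 2 == 0
```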

To use this model, you can import it as a pipeline like so:

```python
from transformers import pipeline

generator = pipeline(model="cactusfriend/nightmare-invokeai-prompts",
                     tokenizer="cactusfriend/nightmare-invokeai-prompts",
                     task="text-generation")
```

Here's an example function that will, by default, generate 20 prompts at a temperature of 1.8, which seems to work well for this model.

```python
def makePrompts(prompt: str, *, p: float = 0.9,
                k: int = 40, num: int = 20,
                temp: float = 1.8, mnt: int = 150):
    # generate `num` continuations of `prompt`, up to `mnt` new tokens each
    outputs = generator(prompt, max_new_tokens=mnt,
                        temperature=temp, do_sample=True,
                        top_p=p, top_k=k, num_return_sequences=num)
    # drop exact duplicate generations before printing
    items = {i['generated_text'] for i in outputs}
    print("-" * 60)
    print("\n ---\n".join(items))
    print("-" * 60)
```

Then, you can call it like so:

makePrompts("a photograph of")
# or, to change some defaults:
makePrompts("spaghetti all over", temp=1.4, p=0.92, k=45)