---
license: openrail
pipeline_tag: text-generation
---

A model based upon the prompts of all the images in my InvokeAI output directory. These are mostly positive prompts only, though you may catch some words in [] brackets. Note: the prompts are very chaotic; a good way to stress test a model, perhaps?

To use this model, you can import it as a pipeline like so:

```py
from transformers import pipeline

generator = pipeline(model="cactusfriend/nightmare-invokeai-prompts",
                     tokenizer="cactusfriend/nightmare-invokeai-prompts",
                     task="text-generation")
```

Here's an example function that, by default, generates 20 prompts at a temperature of 1.8, which seems to work well for this model.

```py
def makePrompts(prompt: str, *, p: float = 0.9, k: int = 40,
                num: int = 20, temp: float = 1.8, mnt: int = 150):
    # Sample `num` continuations of `prompt` using top-p / top-k sampling.
    outputs = generator(prompt, max_new_tokens=mnt, temperature=temp,
                        do_sample=True, top_p=p, top_k=k,
                        num_return_sequences=num)
    # De-duplicate the generated prompts before printing.
    items = set([i['generated_text'] for i in outputs])
    print("-" * 60)
    print("\n".join(items))
    print("-" * 60)
```

Then, you can call it like so:

```py
makePrompts("a photograph of")
# or, to change some defaults:
makePrompts("spaghetti all over", temp=1.4, p=0.92, k=45)
```
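
Because `makePrompts` samples (`do_sample=True`), the output will differ from run to run. If you want reproducible results, one option is to fix the random seed before generating; the sketch below uses `transformers.set_seed`, and the seed value 42 is just an arbitrary choice:

```py
from transformers import set_seed

# Seed the RNGs used during generation so that repeated calls
# with the same arguments return the same set of prompts.
set_seed(42)
makePrompts("a photograph of")
```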