
ExLlamaV2 quantization of the model created by DreamGen!

Original model: https://huggingface.co/dreamgen/opus-v0-70b

Requires ExLlamaV2, which is being developed by turboderp (https://github.com/turboderp/exllamav2) under the MIT license.

Main branch is 5 bpw (bits per weight) with an 8-bit head (8h).

6b8h is 6 bpw with an 8-bit head.

4.6b8h is 4.6 bpw with an 8-bit head.

2.5b8h is 2.5 bpw with an 8-bit head.
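
Each quantization lives on its own branch of this repository. Below is a minimal sketch of fetching one branch with huggingface_hub; the repo id and local path are placeholders, not the actual values:

```python
from huggingface_hub import snapshot_download

# Download the 4.6 bpw / 8-bit-head quant.
# Replace the repo_id placeholder with this repository's actual id.
snapshot_download(
    repo_id="<user>/opus-v0-70b-exl2",     # placeholder repo id
    revision="4.6b8h",                     # branch name: "2.5b8h", "4.6b8h", "6b8h", or "main" (5 bpw)
    local_dir="/models/opus-v0-70b-exl2",  # placeholder download directory
)
```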


DreamGen Opus V0 70B

DreamGen Opus is a family of uncensored models fine-tuned for (steerable) story writing; the models also work well for chat / RP. The DreamGen Opus V0 70B model is derived from meta-llama/Llama-2-70b-hf.

You can try the Opus V0 70B (AWQ) model for free on dreamgen.com.

Quantized versions:

Other sizes:

Prompting

Please see the official documentation for a more detailed guide, including how to prompt the model for chat / RP.

The (collaborative / steerable) story writing task teaches the model to respect <setting> and <instruction> inserted into the prompt.

Example prompt:

<setting>
(Setting provides a general overview of the story and characters)
This story is a twist on the traditional Little Red Riding Hood story.
In this variation, Little Red Riding Hood and her grandma are secretly werewolves.
</setting>

(Previous part of the story, potentially empty)

<instruction>
(Instruction tells the model what should happen in the next few sentences / paragraphs)
Little Red Riding Hood confronts the Big Bad Wolf, transforming into her wolf form.
</instruction>
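
The parenthesized lines above are notes, not literal prompt text. A minimal Python sketch of assembling this template (the helper name and strings are illustrative, not an official API):

```python
def build_prompt(setting: str, story_so_far: str, instruction: str) -> str:
    """Assemble a story-writing prompt in the format shown above."""
    parts = [
        "<setting>",
        setting.strip(),
        "</setting>",
        "",
        story_so_far.strip(),  # may be empty at the start of a new story
        "",
        "<instruction>",
        instruction.strip(),
        "</instruction>",
    ]
    return "\n".join(parts) + "\n"

prompt = build_prompt(
    setting=(
        "This story is a twist on the traditional Little Red Riding Hood story.\n"
        "In this variation, Little Red Riding Hood and her grandma are secretly werewolves."
    ),
    story_so_far="",
    instruction=(
        "Little Red Riding Hood confronts the Big Bad Wolf, "
        "transforming into her wolf form."
    ),
)
print(prompt)
```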

Dataset

The fine-tuning dataset consisted of >1M tokens of collaborative writing task examples, each example up to 4096 tokens long. On top of that, >20M tokens of more general, but less instructed, examples were included to help preserve generalization.

Community

Join the DreamGen community on Discord, or follow our X/Twitter account for new model releases and other news. We will soon be releasing models with longer context windows, as well as models specifically fine-tuned for character chat & roleplay.

Help us shape the future of DreamGen.

Running the model

The model should be compatible with any software that supports meta-llama/Llama-2-70b-hf. Note that because this is a 70B model, the resource requirements are large. You can try the quantized versions linked at the top, but expect a quality drop.
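
For the ExLlamaV2 quants in this repository, a minimal loading and generation sketch with the exllamav2 Python API could look like the following; the model path, prompt, and sampling settings are placeholders, and you should check the ExLlamaV2 examples for the current API:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Directory containing the downloaded quantized weights (placeholder path).
config = ExLlamaV2Config()
config.model_dir = "/models/opus-v0-70b-exl2"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate the cache lazily...
model.load_autosplit(cache)               # ...then split the 70B weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9
settings.token_repetition_penalty = 1.05

# Example prompt in the <setting> / <instruction> format described above.
prompt = (
    "<setting>\n"
    "This story is a twist on the traditional Little Red Riding Hood story.\n"
    "</setting>\n\n"
    "<instruction>\n"
    "Little Red Riding Hood confronts the Big Bad Wolf.\n"
    "</instruction>\n"
)

# Generate up to 300 new tokens.
output = generator.generate_simple(prompt, settings, 300)
print(output)
```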

Running on DreamGen.com (free)

You can try the 70B (AWQ) model for free at dreamgen.com — note that an account is required. The version used for the website is the official AWQ 4bit quant dreamgen/opus-v0-70b-awq.

License
