---
language:
- en
pipeline_tag: text-generation
tags:
- not-for-all-audiences
- llama
- llama-2
- meta
- pytorch
- transformers
- text-generation
- storytelling
- storywriting
- stories
- writing
---

Join the Coffee & AI Discord for AI stuff and things!

[![Discord](https://img.shields.io/discord/232596713892872193?logo=discord)](https://discord.gg/2JhHVh7CGu)

This is a model merge of https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b + https://huggingface.co/Blackroot/Llama-2-13B-Storywriter-LORA

Thanks to NousResearch and Meta for the base models.

A brief warning: no alignment or attempts of any kind were made to rein in, censor, or otherwise manipulate the outputs of this model. It is a raw model and may produce outputs that are unexpected or otherwise distasteful. You are the master of your own destiny, and the master of this model; use it with caution.

Quantization was done with 128 group size and act order enabled. This quantization is aimed at Exllama text generation.

Nous-Hermes is the base model, so the recommendation is to use its recommended Alpaca instruct format for prompts. The model follows the Alpaca prompt format:

```
### Instruction:

### Response:
```

or

```
### Instruction:

### Input:

### Response:
```
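For illustration, here is a minimal sketch of building an Alpaca-style prompt and generating with Hugging Face Transformers. The model id, sampling settings, and loading path are assumptions, not part of this card; since this quant targets Exllama, an Exllama-based loader may be a better fit than plain Transformers.

```python
# Minimal sketch: Alpaca-style prompting with Hugging Face Transformers.
# The model id below is a hypothetical stand-in for this merge; the GPTQ
# weights here are aimed at Exllama, so an Exllama-based loader may suit better.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "NousResearch/Nous-Hermes-Llama2-13b"  # assumption: replace with this repo's id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Build the Alpaca instruct prompt the card recommends.
instruction = "Write the opening paragraph of a short story set on a generation ship."
prompt = f"### Instruction:\n{instruction}\n\n### Response:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)

# Print only the continuation, dropping the echoed prompt tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```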