---
license: other
tags:
- chatbot
- gptq
- storywriting
---

# chronos-13b-8K-4bit

The original Chronos-13B model was merged with a LoRA trained on 1,500 samples, the majority of which are in the 8,000-token range and in the same style, with a cutoff of 8k tokens in full 8-bit. This model is meant to be used standalone, but if you would like to merge or combine the LoRA on your own, you can find it here: https://huggingface.co/ZeusLabs/chronos-13b-8k-lora

The `config.json` includes modifications that allow extended context, so you will need to load the model with `trust_remote_code` if you are not using Exllama.

This is a 4-bit (int4) quantized version, using `true-sequential` and `groupsize 128`, of https://huggingface.co/elinas/chronos-13b plus https://huggingface.co/ZeusLabs/chronos-13b-8k-lora.

This model is primarily focused on chat, roleplay, and storywriting, but it can also accomplish other tasks such as simple reasoning and coding. Chronos generates very long, coherent outputs, largely due to the human inputs it was trained on.

This model uses Alpaca formatting, so for optimal performance, use:

```
### Instruction:
Your instruction or question here.

### Response:
```

[Zeus Labs Discord](https://discord.gg/76e2HBzRKD)
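A small helper can keep prompts consistent with the Alpaca template above. This is a minimal sketch; the function name `build_alpaca_prompt` is illustrative and not part of the model's tooling:

```python
def build_alpaca_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca template this model expects.

    Note: this helper is an illustration, not an official utility.
    """
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )


# Example usage: the model's completion is generated after "### Response:".
prompt = build_alpaca_prompt("Write the opening scene of a mystery story.")
print(prompt)
```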