---
license: other
library_name: transformers
tags:
- mergekit
- merge
base_model: jukofyork/Dark-Miqu-70B
model_name: Dark-Miqu-70B-GGUF
quantized_by: brooketh
---
**
The official library of GGUF format models for use in the local AI chat app, Faraday.dev.
**
***
# Dark Miqu 70B
- **Creator:** [jukofyork](https://huggingface.co/jukofyork/)
- **Original:** [Dark Miqu 70B](https://huggingface.co/jukofyork/Dark-Miqu-70B)
- **Date Created:** 2024-05-04
- **Trained Context:** 32764 tokens
- **Description:** A "dark" creative writing model with 32k context. Based on miqu-1-70b, but with greatly reduced "positivity" and "-isms". Excels at writing Dark/Grimdark fantasy.

## What is a GGUF?
GGUF is a large language model (LLM) format that allows a model to be split between the CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Faraday.dev. Where other model formats require higher-end GPUs with ample VRAM, GGUFs can be run efficiently on a much wider variety of hardware. GGUF models are quantized to reduce resource usage, at the cost of reduced coherence at lower quantization levels. Quantization reduces the precision of the model weights by lowering the number of bits used to store each weight; a minimal sketch of the idea, along with an example of loading a GGUF for mixed CPU/GPU inference, appears at the end of this card.

***
## Faraday.dev
- Free, local AI chat application.
- One-click installation on Mac and PC.
- Automatically uses the GPU for maximum speed.
- Built-in model manager.
- High-quality character hub.
- Zero-config desktop-to-mobile tethering.

Faraday makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether a model is in the correct format. Faraday supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable.

**Join us on [Discord](https://discord.gg/SyNN2vC9tQ)**
***
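To make the quantization idea above concrete, here is a minimal Python sketch of block-wise, symmetric round-to-nearest quantization. This is an illustration of the general technique only, not the actual k-quant schemes GGUF files use; the block size and bit width are arbitrary choices for the example.

```python
import numpy as np

def quantize_block(weights: np.ndarray, bits: int = 4):
    """Symmetric round-to-nearest quantization of one block of weights.

    Each weight is stored as a small signed integer, with a single
    float scale shared by the whole block.
    """
    qmax = 2 ** (bits - 1) - 1                       # e.g. 7 for 4-bit
    scale = max(np.abs(weights).max() / qmax, 1e-12)  # avoid divide-by-zero
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize_block(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the stored integers."""
    return q.astype(np.float32) * scale

# One 32-weight block: 4-bit storage keeps the overall shape of the data
# but loses precision, which is why coherence degrades at low quant levels.
block = np.random.randn(32).astype(np.float32)
q, scale = quantize_block(block, bits=4)
error = np.abs(block - dequantize_block(q, scale)).mean()
print(f"mean absolute rounding error: {error:.4f}")
```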
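And here is a sketch of the CPU/GPU split in practice, using the llama-cpp-python bindings for llama.cpp. The filename is an assumed Q4_K_M quant of this repo; adjust the path, and tune `n_gpu_layers` to your available VRAM (layers that do not fit on the GPU stay on the CPU).

```python
# Requires: pip install llama-cpp-python (built with GPU support for offload)
from llama_cpp import Llama

llm = Llama(
    model_path="Dark-Miqu-70B.Q4_K_M.gguf",  # assumed quant filename
    n_ctx=32764,      # trained context of this model
    n_gpu_layers=40,  # layers offloaded to GPU; the rest run on CPU
)

out = llm("Write the opening line of a grimdark fantasy tale.", max_tokens=64)
print(out["choices"][0]["text"])
```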