---
license: other
language:
- en
---
My GGUF-IQ-Imatrix quants for [**Nitral-AI/Hathor-L3-8B-v.01**](https://huggingface.co/Nitral-AI/Hathor-L3-8B-v.01).
This model might still be a bit experimental.
"Designed to seamlessly integrate the qualities of creativity, intelligence, and robust performance."
> [!IMPORTANT]
> **Quantization process:** <br>
> For future reference, these quants were made after the fixes from [**#6920**](https://github.com/ggerganov/llama.cpp/pull/6920) were merged. <br>
> Imatrix data was generated from the FP16-GGUF, and the quantized conversions were made directly from the BF16-GGUF. <br>
> This was a bit more disk- and compute-intensive, but it hopefully avoided any losses during conversion. <br>
> If you notice any issues, let me know in the discussions. A sketch of this workflow is shown below.
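
For the curious, here is a minimal sketch of this kind of imatrix quantization workflow with llama.cpp; the model path, output file names, and calibration dataset are placeholders, and the binary names vary between llama.cpp versions:

```bash
# Sketch of an imatrix quantization workflow (names are placeholders;
# newer llama.cpp versions ship these tools as llama-imatrix/llama-quantize).

# 1. Convert the HF model to GGUF: BF16 for quantizing, FP16 for imatrix data.
python convert-hf-to-gguf.py Hathor-L3-8B-v.01 --outtype bf16 --outfile hathor-bf16.gguf
python convert-hf-to-gguf.py Hathor-L3-8B-v.01 --outtype f16 --outfile hathor-f16.gguf

# 2. Generate importance matrix data from the FP16 GGUF with a calibration text.
./imatrix -m hathor-f16.gguf -f calibration-data.txt -o imatrix.dat

# 3. Quantize directly from the BF16 GGUF, applying the imatrix.
./quantize --imatrix imatrix.dat hathor-bf16.gguf hathor-Q4_K_M-imat.gguf Q4_K_M
```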
> [!NOTE]
> **General usage:** <br>
> Use the [**latest version of KoboldCpp**](https://github.com/LostRuins/koboldcpp/releases/latest). <br>
> You can now also use `--flashattention` on KoboldCpp, even with non-RTX cards, for reduced VRAM usage. <br>
> For **8GB VRAM** GPUs, I recommend the **Q4_K_M-imat** quant for context sizes up to 12288. <br>
> For **12GB VRAM** GPUs, the **Q5_K_M-imat** quant will give you a great size/quality balance. An example launch command is shown after this note. <br>
>
> **Resources:** <br>
> You can find out more about how the quants stack up against each other and about the quant types [**here**](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [**here**](https://rentry.org/llama-cpp-quants-or-fine-ill-do-it-myself-then-pt-2), respectively.
>
> **Presets:** <br>
> Some compatible SillyTavern presets can be found [**here (Hathor Presets)**](https://huggingface.co/Nitral-AI/Hathor-L3-8B-v.01/tree/main/Hathor%20Presets) or [**here (Virt's Roleplay Presets)**](https://huggingface.co/Virt-io/SillyTavern-Presets). <br>
<!-- > Check [**discussions such as this one**](https://huggingface.co/Virt-io/SillyTavern-Presets/discussions/5#664d6fb87c563d4d95151baa) for other recommendations and samplers.
-->
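
As mentioned in the note above, a hypothetical KoboldCpp launch for an **8GB VRAM** GPU might look like the sketch below; the quant file name, backend flag, and `--gpulayers` value are illustrative assumptions, so adjust them for your hardware:

```bash
# Hypothetical launch: Q4_K_M-imat quant at 12288 context with FlashAttention.
# --usecublas and --gpulayers 33 are examples; pick the backend and layer
# count that match your GPU.
python koboldcpp.py \
  --model Hathor-L3-8B-v.01-Q4_K_M-imat.gguf \
  --contextsize 12288 \
  --flashattention \
  --usecublas \
  --gpulayers 33
```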
> [!TIP]
> **Personal-support:** <br>
> I apologize for disrupting your experience. <br>
> I'm currently working on moving to a better internet provider. <br>
> If you **want** and you are **able to**... <br>
> You can [**spare some change over here (Ko-fi)**](https://ko-fi.com/Lewdiculous). <br>
>
> **Author-support:** <br>
> You can support the author [**at their own page**](https://huggingface.co/Nitral-AI).
![image/webp](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/FLvA7-CWp3UhBuR2eGSh7.webp)
## **Original model card information:**
# "Hathor-v0.1 is a model based on the LLaMA 3 architecture: Designed to seamlessly integrate the qualities of creativity, intelligence, and robust performance. Making it an ideal tool for a wide range of applications; such as creative writing, educational support and human/computer interaction."
# Recommended ST Presets: [Hathor Presets](https://huggingface.co/Nitral-AI/Hathor-L3-8B-v.01/tree/main/Hathor%20Presets)
---
# Notes: Hathor is trained for 3 epochs on private RP data, synthetic Opus instructions, and a mix of light/classical novel data. (Heavily WIP)
---
- If you want to use the vision functionality:
  * You must use the latest version of [KoboldCpp](https://github.com/LostRuins/koboldcpp).
  * To use the multimodal **vision** capabilities of this model, you need to load the specified **mmproj** file; it can be found inside this model repo: [Llava MMProj](https://huggingface.co/Nitral-AI/Llama-3-Update-3.0-mmproj-model-f16).
  * You can load the **mmproj** by using the corresponding section in the interface, shown in the screenshot below (an example command follows it):
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UX6Ubss2EPNAT3SKGMLe0.png)
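
If you prefer the command line, a hypothetical equivalent loads the projector with KoboldCpp's `--mmproj` flag; both file names below are placeholders for the files you actually downloaded:

```bash
# Load the vision projector (mmproj) alongside the model (names are placeholders).
python koboldcpp.py \
  --model Hathor-L3-8B-v.01-Q4_K_M-imat.gguf \
  --mmproj llama-3-update-3.0-mmproj-model-f16.gguf
```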