|
--- |
|
base_model: |
|
- unsloth/Mistral-Small-Instruct-2409
|
- rAIfle/Acolyte-LORA |
|
library_name: transformers |
|
tags: |
|
- mergekit |
|
- merge |
|
|
|
--- |
|
# Acolyte-22B |
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6569a4ed2419be6072890cf8/3dcGMcrWK2-2vQh9QBt3o.png) |
|
|
|
A LoRA trained on a grab-bag of datasets on top of Mistral-Small-Instruct-2409, then SLERP-merged back onto the base model at t=0.5. Decent enough for its size.
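For the curious, "SLERPed onto base at 0.5" means each weight tensor sits halfway along the arc between the base and the LoRA-tuned weights, rather than on the straight line between them. A minimal NumPy sketch of the interpolation formula (illustrative only, not the actual merge code):

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    # Angle between the two tensors, clipped for numerical safety
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        # Nearly parallel tensors: fall back to plain linear interpolation
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

base = np.array([1.0, 0.0])
tuned = np.array([0.0, 1.0])
merged = slerp(base, tuned, 0.5)  # midpoint of the arc: [0.7071, 0.7071]
```

Unlike a straight average, SLERP preserves the magnitude of the interpolated direction, which is why it is a popular choice for model merges.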
|
Check the [LoRA](https://huggingface.co/rAIfle/Acolyte-LORA) for dataset info. |
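A merge like this is typically expressed as a mergekit SLERP config. The sketch below is an assumption based on mergekit's documented `slerp` format, not the actual recipe used; the layer count and dtype in particular are guesses:

```yaml
# Hypothetical mergekit config for a 0.5 SLERP onto base
slices:
  - sources:
      - model: unsloth/Mistral-Small-Instruct-2409
      - model: rAIfle/Acolyte-LORA   # LoRA applied to the base first
merge_method: slerp
base_model: unsloth/Mistral-Small-Instruct-2409
parameters:
  t: 0.5   # halfway between base and tuned weights
dtype: bfloat16
```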
|
|
|
Use `Mistral V2 & V3` template. |
|
|
|
## Quants |
|
|
|
- [iMat GGUFs](https://huggingface.co/Quant-Cartel/Acolyte-22B-iMat-GGUF) |
|
- [exl2 longcals](https://huggingface.co/Quant-Cartel/Acolyte-22B-exl2-longcal) |