# Toppy M 7B - ExLlama V2
Original model: Toppy M 7B
## Description
This is an EXL2 quantization of Undi95's Toppy M 7B model.
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
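The template above can be filled in programmatically before sending text to the model; a minimal sketch (the helper name is illustrative, not part of any library):

```python
# Fill the Alpaca prompt template shown above.
# ALPACA_TEMPLATE mirrors the card's template verbatim.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

def build_alpaca_prompt(prompt: str) -> str:
    """Return the full prompt string for a single instruction."""
    return ALPACA_TEMPLATE.format(prompt=prompt)

print(build_alpaca_prompt("List three prime numbers."))
```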
## Quantizations
| Bits Per Weight | Size |
|---|---|
| main (2.4bpw) | 2.29 GB |
| 3bpw | 2.78 GB |
| 3.5bpw | 3.19 GB |
| 4bpw | 3.59 GB |
| 4.5bpw | 4.00 GB |
| 5bpw | 4.41 GB |
| 6bpw | 5.22 GB |
| 8bpw | 6.84 GB |
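Each quantization above lives on its own repo branch (the 2.4bpw build is on `main`). A hedged sketch of fetching one of them with `huggingface_hub` — the branch-name mapping is an assumption based on the table, and the download call is commented out because it pulls several GB:

```python
# Map a bits-per-weight choice from the table to the repo branch
# assumed to hold it. The 2.4bpw quant is assumed to live on 'main';
# every other size is assumed to use a branch named after its bpw.
def exl2_revision(bpw: str) -> str:
    return "main" if bpw == "2.4bpw" else bpw

# Uncomment to actually download (several GB):
# from huggingface_hub import snapshot_download  # pip install huggingface_hub
# local_dir = snapshot_download(
#     repo_id="LogicismTV/Toppy-M-7B-exl2",
#     revision=exl2_revision("4bpw"),
# )

print(exl2_revision("2.4bpw"), exl2_revision("4bpw"))
```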
## Original model card: Undi95's Toppy M 7B

### Description
This repo contains fp16 files of Toppy-M-7B, a merge I have done with the new task_arithmetic merge method from mergekit.
This project was a request from BlueNipples: link
### Models and LoRAs used
- openchat/openchat_3.5
- NousResearch/Nous-Capybara-7B-V1.9
- HuggingFaceH4/zephyr-7b-beta
- lemonilia/AshhLimaRP-Mistral-7B
- Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b
- Undi95/Mistral-pippa-sharegpt-7b-qlora
### The sauce

```
openchat/openchat_3.5
lemonilia/AshhLimaRP-Mistral-7B (LoRA) x 0.38

NousResearch/Nous-Capybara-7B-V1.9
Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b x 0.27

HuggingFaceH4/zephyr-7b-beta
Undi95/Mistral-pippa-sharegpt-7b-qlora x 0.38
```
```yaml
merge_method: task_arithmetic
base_model: mistralai/Mistral-7B-v0.1
models:
  - model: mistralai/Mistral-7B-v0.1
  - model: Undi95/zephyr-7b-beta-pippa-sharegpt
    parameters:
      weight: 0.42
  - model: Undi95/Nous-Capybara-7B-V1.9-120-Days
    parameters:
      weight: 0.29
  - model: Undi95/openchat_3.5-LimaRP-13B
    parameters:
      weight: 0.48
dtype: bfloat16
```
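A `task_arithmetic` merge adds weighted "task vectors" (each fine-tune's weights minus the base weights) back onto the base model: merged = base + Σᵢ weightᵢ · (modelᵢ − base). A toy sketch of that arithmetic with made-up scalar values (not real 7B tensors):

```python
# Toy illustration of mergekit's task_arithmetic semantics:
#   merged = base + sum_i weight_i * (model_i - base)
# The numbers below are invented for illustration only.
def task_arithmetic(base, models, weights):
    """Merge flat parameter lists: base plus weighted task vectors."""
    merged = []
    for j, b in enumerate(base):
        delta = sum(w * (m[j] - b) for m, w in zip(models, weights))
        merged.append(b + delta)
    return merged

base = [1.0, 2.0]                  # stand-in for base model parameters
models = [[1.5, 2.0], [1.0, 3.0]]  # two fine-tunes of the same base
weights = [0.42, 0.29]             # per-model weights, as in the config above

print(task_arithmetic(base, models, weights))
```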
### Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
If you want to support me, you can here.
## Model tree for LogicismTV/Toppy-M-7B-exl2

Base model: Undi95/Toppy-M-7B