
Description

This repo contains fp16 files of Toppy-M-7B, a merge I made with the new task_arithmetic merge method from mergekit.

This project was a request from BlueNipples: link
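For context, task_arithmetic merging computes base + Σ wᵢ · (modelᵢ − base): each fine-tune contributes a weighted "task vector" (its parameter delta from the shared base). Below is a minimal conceptual sketch on raw state dicts, assuming all models share the same architecture and parameter names; mergekit itself handles sharding, dtypes, and tokenizers.

```python
# Conceptual sketch of task_arithmetic: merged = base + sum_i w_i * (model_i - base).
import torch

def task_arithmetic(base, finetunes):
    """base: name -> tensor; finetunes: list of (state_dict, weight) pairs."""
    merged = {name: param.clone() for name, param in base.items()}
    for state_dict, weight in finetunes:
        for name, param in state_dict.items():
            merged[name] += weight * (param - base[name])
    return merged

# Toy example with one 2x2 "layer":
base = {"w": torch.zeros(2, 2)}
finetune = {"w": torch.ones(2, 2)}
print(task_arithmetic(base, [(finetune, 0.42)])["w"])  # every entry is 0.42
```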

Models and LoRAs used

The sauce

Each LoRA was first applied to its base model at the weight shown, producing the three intermediate models referenced in the task_arithmetic recipe below (a sketch of this step follows the list):

openchat/openchat_3.5
+ lemonilia/AshhLimaRP-Mistral-7B (LoRA) x 0.38

NousResearch/Nous-Capybara-7B-V1.9
+ Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b (LoRA) x 0.27

HuggingFaceH4/zephyr-7b-beta
+ Undi95/Mistral-pippa-sharegpt-7b-qlora (QLoRA) x 0.38
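A LoRA application step like the ones above can be approximated with peft by loading the adapter onto its base model and baking it into the weights. A minimal sketch, assuming the repos are reachable on the Hub; note that merge_and_unload() merges at full strength, so the fractional weights listed above (e.g. x 0.38) imply the adapter's contribution was scaled down separately:

```python
# Sketch: apply a LoRA to its base model and bake it into the weights with peft.
# The output path is illustrative. This merges at weight 1.0; scaling the LoRA
# delta (e.g. by 0.38, as in the recipe) is not done here.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("openchat/openchat_3.5", torch_dtype="auto")
lora = PeftModel.from_pretrained(base, "lemonilia/AshhLimaRP-Mistral-7B")
merged = lora.merge_and_unload()  # plain transformers model with the LoRA baked in
merged.save_pretrained("./openchat_3.5-LimaRP")
```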

merge_method: task_arithmetic
base_model: mistralai/Mistral-7B-v0.1
models:
  - model: mistralai/Mistral-7B-v0.1
  - model: Undi95/zephyr-7b-beta-pippa-sharegpt
    parameters:
      weight: 0.42
  - model: Undi95/Nous-Capybara-7B-V1.9-120-Days
    parameters:
      weight: 0.29
  - model: Undi95/openchat_3.5-LimaRP-13B
    parameters:
      weight: 0.48
dtype: bfloat16
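To reproduce the merge, save the recipe above as config.yml and run it through mergekit. A sketch using mergekit's Python API, assuming a recent mergekit build (the mergekit-yaml CLI is equivalent); the output path is illustrative:

```python
# Sketch: run the task_arithmetic recipe with mergekit's Python API.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    out_path="./Toppy-M-7B",  # illustrative output directory
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```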

Prompt template: Alpaca

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
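A minimal sketch of using this template with transformers; the prompt and sampling settings are illustrative, not recommendations:

```python
# Sketch: format an Alpaca-style prompt and generate with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

ALPACA = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

tokenizer = AutoTokenizer.from_pretrained("Undi95/Toppy-M-7B")
model = AutoModelForCausalLM.from_pretrained(
    "Undi95/Toppy-M-7B", torch_dtype="auto", device_map="auto"
)

inputs = tokenizer(
    ALPACA.format(prompt="Write a haiku about merging models."),
    return_tensors="pt",
).to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```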

If you want to support me, you can do so here.
