Merge:

layer_slices:
  - model: Gryphe/MythoMax-L2-13b
    start: 0
    end: 16
  - model: Undi95/MM-ReMM-L2-20B-Part1
    start: 8
    end: 20
  - model: Gryphe/MythoMax-L2-13b
    start: 17
    end: 32
  - model: Undi95/MM-ReMM-L2-20B-Part1
    start: 21
    end: 40
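As a rough sketch of what the config above does: the four slices are stacked in order into one deeper network. A minimal Python illustration follows, assuming end-exclusive ranges and passthrough-style concatenation (as tools like mergekit do); neither convention is stated in the card.

```python
# Sketch: how the four layer slices above stack into one model.
# Assumptions (not from the card): `end` is exclusive, slices are
# concatenated in order, passthrough-style.

slices = [
    ("Gryphe/MythoMax-L2-13b", 0, 16),
    ("Undi95/MM-ReMM-L2-20B-Part1", 8, 20),
    ("Gryphe/MythoMax-L2-13b", 17, 32),
    ("Undi95/MM-ReMM-L2-20B-Part1", 21, 40),
]

total = 0
for model, start, end in slices:
    n = end - start
    print(f"{model}: layers {start}-{end} -> {n} layers")
    total += n
print(f"stacked depth: {total} layers")  # 16 + 12 + 15 + 19 = 62
```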

Models used

  • Gryphe/MythoMax-L2-13b
  • Undi95/ReMM-v2.1-L2-13B

Part1 = ReMM v2.1 merged with MythoMax at a low weight to keep consistency. I call this "dilution": the result shows consistency and coherency without repetition or looping, aside from the small amount of duplicated data.
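A minimal sketch of what such a "dilution" merge could look like, assuming a plain weighted average of the two models' tensors; the 0.1 donor weight is a hypothetical value, not the actual ratio used for Part1:

```python
import torch

def dilute(base_sd, donor_sd, donor_weight=0.1):
    """Weighted linear merge: a base model 'diluted' with a low-weight donor.

    base_sd / donor_sd are state dicts of two same-architecture models
    (here: ReMM v2.1 as base, MythoMax as the low-weight donor).
    donor_weight=0.1 is hypothetical; the card does not give the ratio.
    """
    merged = {}
    for name, tensor in base_sd.items():
        merged[name] = (1.0 - donor_weight) * tensor + donor_weight * donor_sd[name]
    return merged
```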

Prompt template: Alpaca

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
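To run the GGUF files with this template, something like the following works with llama-cpp-python. The file name is a placeholder for whichever quant you download, and the stop string and sampling settings are my own choices, not from the card:

```python
from llama_cpp import Llama

# Placeholder path: point this at the quant you actually downloaded.
llm = Llama(model_path="mm-remm-l2-20b.Q4_K_M.gguf", n_ctx=4096)

# Alpaca-format prompt, as specified above.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short scene set in a fantasy tavern.\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=256, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```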
Model details

  • Format: GGUF
  • Size: 20B params
  • Architecture: llama
  • Quantizations available: 4-bit, 5-bit, 6-bit, 8-bit
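As a back-of-the-envelope guide, a quantized GGUF file is roughly params × bits / 8 bytes, plus overhead (real GGUF quant types spend extra bits per weight on scales and metadata, so treat these as lower bounds):

```python
# Rough lower-bound file sizes for a 20B-param model at each nominal bit width.
params = 20e9
for bits in (4, 5, 6, 8):
    gib = params * bits / 8 / 2**30
    print(f"{bits}-bit: ~{gib:.1f} GiB")
```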
