Request: [out-of-scope]

#14
by ClaudioItaly - opened

[Required] Model name:
00005-1079493375.jpeg

[Required] Model link:
1- froggeric/WestLake-10.7B-v2
2- mlabonne/UltraMerge-7B

[Required] Brief description:
Strong ability in dialogue, suggestions, and writing.

[Required] An image/direct image link to represent the model (square shaped):

[Optional] Additional quants (if you want any):

Default list of quants for reference:

    "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
    "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"

@ClaudioItaly

This card is meant only to request GGUF-IQ-Imatrix quants for models that meet the requirements below.

Just so you know, the request rules have been updated: you can no longer request merges.

If you want to try froggeric/WestLake-10.7B-v2, it has already been quantized.

Also, this merge wouldn't work as written: froggeric/WestLake-10.7B-v2 is a 10.7B-parameter model and mlabonne/UltraMerge-7B is a 7B-parameter model, so their layer counts don't line up for a merge.

If you still want to do this, you can check out this Colab notebook.

If you proceed on your own, DO NOT USE froggeric/WestLake-10.7B-v2; use senseable/WestLake-7B-v2 instead.

Here is an example for SLERP:

slices:
  - sources:
      - model: senseable/WestLake-7B-v2
        layer_range: [0, 32]
      - model: mlabonne/UltraMerge-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: senseable/WestLake-7B-v2
parameters:
  t: # interpolation factor: 0 keeps the base_model, 1 takes the other model
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1] # varies across layer depth for attention tensors
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0] # inverse curve for MLP tensors
    - value: 0.5 # fallback for rest of tensors
dtype: float16
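
If you go that route, here is a minimal sketch of how you might run the config above with mergekit, assuming mergekit is installed, the YAML is saved as config.yaml, and ./merged-model is just a placeholder output directory:

    # run the SLERP merge defined in config.yaml and write the merged model to ./merged-model
    mergekit-yaml config.yaml ./merged-model --copy-tokenizer

You can add --cuda if a GPU is available to speed up the merge.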

[I have updated the ReadMe; merge requests aren't part of this card, which is only meant for quants of existing models that meet the requirements.]

Lewdiculous changed discussion status to closed
Lewdiculous changed discussion title from Request: Liguriam 7B to Request: [out-of-scope]

Ok
