

This is an experimental model using the zeroing method from L3-Aethora-15B-V2.

If this model pans out the way I hope, I'll expand on it and add a custom model card like my others. Currently this is just an experiment.

In case anyone asks, NeMoria-21b literally means:

NeMo = Mistral-Nemo (Instruct)
21b = it has 21B parameters

Merge Details

Merge Method

This model was merged using the passthrough merge method.
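In a passthrough merge, no weights are averaged; the listed layer slices are simply stacked in order to build a deeper model. A minimal sketch of how the slice ranges below add up, assuming end-exclusive `layer_range` bounds as in mergekit configs (the helper is illustrative, not mergekit's actual API):

```python
# Hypothetical sketch: how a passthrough merge assembles layers.
# The slice ranges mirror this model's YAML config below.

def stack_layers(slices):
    """Concatenate layer indices from each (start, end) slice, in order."""
    layers = []
    for start, end in slices:
        layers.extend(range(start, end))  # end-exclusive, as in mergekit
    return layers

# The four slices used for NeMoria-21b:
slices = [(0, 30), (16, 32), (16, 32), (32, 40)]
layers = stack_layers(slices)
print(len(layers))  # 70 stacked layers, vs. 40 in the base model
```

Because layers 16-32 appear three times, the merged model ends up with 70 decoder layers drawn from the base model's 40.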

Models Merged

The following models were included in the merge:

unsloth/Mistral-Nemo-Instruct-2407

Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 30]
    model: unsloth/Mistral-Nemo-Instruct-2407
- sources:
  - layer_range: [16, 32]
    model: unsloth/Mistral-Nemo-Instruct-2407
    parameters:
      scale:
      - filter: o_proj
        value: 0.0
      - filter: down_proj
        value: 0.0
      - value: 1.0
- sources:
  - layer_range: [16, 32]
    model: unsloth/Mistral-Nemo-Instruct-2407
    parameters:
      scale:
      - filter: o_proj
        value: 0.0
      - filter: down_proj
        value: 0.0
      - value: 1.0
- sources:
  - layer_range: [32, 40]
    model: unsloth/Mistral-Nemo-Instruct-2407
```
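The scale entries in the duplicated 16-32 slices zero out o_proj and down_proj. Since those projections sit at the end of the attention and MLP residual branches, scaling them to 0 means the duplicated layers initially add nothing to the residual stream. A toy sketch of this effect (hypothetical code with a simplified residual block, not Mistral-Nemo's actual modules):

```python
# Toy demonstration: zeroing the output projections of a residual
# block turns the whole layer into an identity on its input.
import numpy as np

rng = np.random.default_rng(0)
hidden = rng.normal(size=(4, 8))   # toy hidden states
W_o = rng.normal(size=(8, 8))      # stand-in for the o_proj weight
W_down = rng.normal(size=(8, 8))   # stand-in for the down_proj weight

def toy_layer(x, o_scale, down_scale):
    # residual + "attention" branch, ending in the o_proj projection
    x = x + (x @ W_o) * o_scale
    # residual + "MLP" branch, ending in the down_proj projection
    x = x + (x @ W_down) * down_scale
    return x

zeroed = toy_layer(hidden, o_scale=0.0, down_scale=0.0)
print(np.allclose(zeroed, hidden))  # True: the layer passes inputs through
```

This is why the duplicated slices don't wreck the model out of the box: they start as near-identity layers, giving finetuning extra capacity to grow into.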
