# MetaMath-OpenHermes-2.5-neural-chat-7b-v3-1-7B-Linear

This is the model card for MetaMath-OpenHermes-2.5-neural-chat-7b-v3-1-7B-Linear, a linear merge of meta-math/MetaMath-Mistral-7B and Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B produced with mergekit.

## YAML Config

```yaml
models:
  - model: meta-math/MetaMath-Mistral-7B
    parameters:
      weight: 0.5
  - model: Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B
    parameters:
      weight: 0.3
merge_method: linear
dtype: float16
```
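The listed weights (0.5 and 0.3) do not sum to 1; mergekit's linear method normalizes weights by default, so each merged tensor is effectively a weighted average. The sketch below illustrates that idea on toy data — it is a simplified illustration of the technique, not mergekit's actual implementation, and the state-dict representation (plain lists of floats) is an assumption for readability:

```python
# Minimal sketch of a linear (weighted-average) model merge.
# Real merges operate on torch tensors loaded from safetensors files;
# here each "state dict" maps a parameter name to a list of floats.

def linear_merge(state_a, state_b, w_a=0.5, w_b=0.3):
    """Merge two state dicts by weighted average, normalizing the weights."""
    total = w_a + w_b  # normalize so the weights sum to 1
    merged = {}
    for name in state_a:
        merged[name] = [
            (w_a * a + w_b * b) / total
            for a, b in zip(state_a[name], state_b[name])
        ]
    return merged

# Toy example with a single three-parameter "layer":
a = {"layer.weight": [1.0, 2.0, 3.0]}
b = {"layer.weight": [3.0, 2.0, 1.0]}
print(linear_merge(a, b))
```

With these weights the first parameter becomes (0.5·1.0 + 0.3·3.0) / 0.8 = 1.75, i.e. the result leans toward the higher-weighted model.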